author	Joey Hess <joeyh@joeyh.name>	2015-05-28 11:54:45 -0400
committer	Joey Hess <joeyh@joeyh.name>	2015-05-28 11:54:45 -0400
commit 219725b5ff610ce9fefd584c1b1b5e8086fc203b (patch)
tree d0429652c7c9b6037ec3420f80c5c9d47d841415
parent d32c379d09f1648f27933cd33fd2a4b90f3efe44 (diff)
parent 56d65a425c9461c88587d070cc732dc49a1d674f (diff)
Merge branch 'master' of ssh://git-annex.branchable.com (tag 5.20150528)
-rw-r--r--	doc/bugs/git_annex_fsck_on_btrfs_devices/comment_3_dfcef745c92ec629f82ec6acc14d1519._comment	| 9
-rw-r--r--	doc/devblog/day_288__microrelease_prep/comment_1_99e359976acbdb9727598f8a87027de0._comment	| 7
-rw-r--r--	doc/devblog/day_288__microrelease_prep/comment_2_27a74d3926083a00c110aafea28420d7._comment	| 7
-rw-r--r--	doc/direct_mode/comment_19_cdf3062fb82078ad5677b82dc5933560._comment	| 13
-rw-r--r--	doc/forum/Proper_usage_of_git_annex_proxy_to_mimc_undo_--depth.mdwn	| 15
-rw-r--r--	doc/forum/how_do_automated_upgrades_work__63__.mdwn	| 11
-rw-r--r--	doc/tips/publishing_your_files_to_the_public.mdwn	| 13
-rw-r--r--	doc/tips/publishing_your_files_to_the_public/comment_5_29c3ee4aed6a5b53b6767a96a7b85ad9._comment	| 7
-rw-r--r--	doc/todo/credentials-less_access_to_s3.mdwn	| 11
-rw-r--r--	doc/todo/find_unused_in_any_commit/comment_2_ab373440bf7bab9179fdfccf6da3e8a4._comment	| 7
10 files changed, 87 insertions(+), 13 deletions(-)
diff --git a/doc/bugs/git_annex_fsck_on_btrfs_devices/comment_3_dfcef745c92ec629f82ec6acc14d1519._comment b/doc/bugs/git_annex_fsck_on_btrfs_devices/comment_3_dfcef745c92ec629f82ec6acc14d1519._comment
new file mode 100644
index 000000000..ec277f9c2
--- /dev/null
+++ b/doc/bugs/git_annex_fsck_on_btrfs_devices/comment_3_dfcef745c92ec629f82ec6acc14d1519._comment
@@ -0,0 +1,9 @@
+[[!comment format=mdwn
+ username="eigengrau"
+ subject="comment 3"
+ date="2015-05-28T13:09:23Z"
+ content="""
+Thanks! If it’s just for one file, it’s probably okay to move it to bad. If the error was intermittent, one can try reinjecting the content.
+
+As for the risk of overkill, I don’t know enough about how the SATA/SCSI subsystem handles things. The corner case would be one where (say, due to EM interference) the SATA connection is reset and the device driver reports read errors for lots and lots of files, but the drive comes back in time so that these files are erroneously moved to bad. However, I guess you do the “move to bad” action file by file, and the whole fsck fails if moving to bad fails. In that case, we probably wouldn’t be bitten by the corner case, because when the drive comes back online, at most one file is moved to “bad” erroneously.
+"""]]
diff --git a/doc/devblog/day_288__microrelease_prep/comment_1_99e359976acbdb9727598f8a87027de0._comment b/doc/devblog/day_288__microrelease_prep/comment_1_99e359976acbdb9727598f8a87027de0._comment
new file mode 100644
index 000000000..3f849e73e
--- /dev/null
+++ b/doc/devblog/day_288__microrelease_prep/comment_1_99e359976acbdb9727598f8a87027de0._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="https://id.koumbit.net/anarcat"
+ subject="bup-cron has something"
+ date="2015-05-27T23:32:18Z"
+ content="""
+so the bup people have a little [bup-damage](https://github.com/bup/bup/blob/master/cmd/damage-cmd.py) command to insert noise into files. It doesn't simulate I/O problems per se, but maybe it could help? --[[anarcat]]
+"""]]
diff --git a/doc/devblog/day_288__microrelease_prep/comment_2_27a74d3926083a00c110aafea28420d7._comment b/doc/devblog/day_288__microrelease_prep/comment_2_27a74d3926083a00c110aafea28420d7._comment
new file mode 100644
index 000000000..461a86e8a
--- /dev/null
+++ b/doc/devblog/day_288__microrelease_prep/comment_2_27a74d3926083a00c110aafea28420d7._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="namelessjon"
+ subject="comment 2"
+ date="2015-05-28T10:14:45Z"
+ content="""
+https://stackoverflow.com/questions/1870696/simulate-a-faulty-block-device-with-read-errors ?
+"""]]
diff --git a/doc/direct_mode/comment_19_cdf3062fb82078ad5677b82dc5933560._comment b/doc/direct_mode/comment_19_cdf3062fb82078ad5677b82dc5933560._comment
deleted file mode 100644
index 1b82f87d1..000000000
--- a/doc/direct_mode/comment_19_cdf3062fb82078ad5677b82dc5933560._comment
+++ /dev/null
@@ -1,13 +0,0 @@
-[[!comment format=mdwn
- username="mitzip"
- subject="comment 19"
- date="2015-05-27T20:20:11Z"
- content="""
-Thanks for correcting that, and thanks for the git-revert suggestion!
-
-I have a question about the usage of git-revert for my purposes. I'm wanting to bring back a version of a file at a certain commit (not the whole commit) and I found this in the git docs...
-
->Note: git revert is used to record some new commits to reverse the effect of some earlier commits (often only a faulty one). If you want to throw away all uncommitted changes in your working directory, you should see git-reset[1], particularly the --hard option. If you want to extract specific files as they were in another commit, you should see git-checkout[1], specifically the git checkout <commit> -- <filename> syntax. Take care with these alternatives as both will discard uncommitted changes in your working directory.
-
-That being said, should I still use `git revert` instead of `git checkout` because `git revert` will take care of making the new commit for me?
-"""]]
diff --git a/doc/forum/Proper_usage_of_git_annex_proxy_to_mimc_undo_--depth.mdwn b/doc/forum/Proper_usage_of_git_annex_proxy_to_mimc_undo_--depth.mdwn
new file mode 100644
index 000000000..1f15d6655
--- /dev/null
+++ b/doc/forum/Proper_usage_of_git_annex_proxy_to_mimc_undo_--depth.mdwn
@@ -0,0 +1,15 @@
+[Thanks Joey for correcting the docs on `git annex undo --depth`, and thanks for the `git-revert` suggestion as a replacement!](http://git-annex.branchable.com/direct_mode/#comment-b6dcfc80842008e7f9f5b8f612b27867)
+
+**Context: Creating an OSX GUI for assistant managed direct mode repos to help with restoring old file versions.**
+
+I saw this in the `git revert` docs and thought that `git annex proxy -- git checkout annex/direct/master~$depth -- $filename` might best suit my needs of restoring a previous version of a file. (I liked the idea of presenting the user with a depth rather than a hash.)
+
+>Note: git revert is used to record some new commits to reverse the effect of some earlier commits (often only a faulty one). If you want to throw away all uncommitted changes in your working directory, you should see git-reset[1], particularly the --hard option. If you want to extract specific files as they were in another commit, you should see git-checkout[1], specifically the `git checkout <commit> -- <filename>` syntax. Take care with these alternatives as both will discard uncommitted changes in your working directory.
+
+What I've found is that your suggestion of `git revert` is nice because it wouldn't create a conflict, as `git checkout` does.
+
+So annex, thorough as it is, creates a `$filename.variant-local.$ext` file after the automatic conflict resolution to preserve the original. `git revert` is neater, history-wise, because there is no conflict: git knows exactly what's changing and where it came from, rather than just some new file content showing up from who knows where, as with `git checkout`.
+
+The issue, it seems, is that `git revert` works on a commit basis, while `git checkout` can operate on files. If I'm right about that, it would be good to know whether annex reliably creates one commit per file, every time. If so, there would be no problem using `git revert`, which is better in almost every other way.
+
+Though I'm still not clear how to use "depth" referencing with `git revert` rather than hashes. Any suggestions?
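A minimal sketch of the depth-to-commit translation I have in mind, using `git rev-parse` (the branch name is taken from the proxy command above; the `depth` value is a hypothetical input from the GUI):

```shell
# Translate a user-facing depth into the commit hash that git revert expects.
# depth=0 means the most recent commit on the branch.
depth=0
commit=$(git rev-parse "annex/direct/master~$depth")
git revert --no-edit "$commit"
```

Whether reverting that single commit yields the desired file state is, of course, exactly the open question when annex batches several files into one commit.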
diff --git a/doc/forum/how_do_automated_upgrades_work__63__.mdwn b/doc/forum/how_do_automated_upgrades_work__63__.mdwn
new file mode 100644
index 000000000..1f404d144
--- /dev/null
+++ b/doc/forum/how_do_automated_upgrades_work__63__.mdwn
@@ -0,0 +1,11 @@
+When I start the assistant, it nicely tells me:
+
+<pre>
+[2015-05-27 20:15:20 UTC] Upgrader: An upgrade of git-annex is available. (version 5.20150522)
+</pre>
+
+That's really cool, but it's not actually upgrading. I looked around the website to understand how that works, and I found [[git-annex-upgrade]] and [[upgrades]], but those pages were not really useful, as they talk more about repository upgrades than about the automated upgrade system. I was expecting [[upgrades]] to talk a bit about automated upgrades, or maybe the [[install]] page...
+
+I am running `5.20150508-g883d57f`, with a standalone image installed by root in `/opt`. Should that directory be writable by the user running git-annex to solve this?
+
+Thanks! --[[anarcat]]
diff --git a/doc/tips/publishing_your_files_to_the_public.mdwn b/doc/tips/publishing_your_files_to_the_public.mdwn
index ae65263a7..d2c074503 100644
--- a/doc/tips/publishing_your_files_to_the_public.mdwn
+++ b/doc/tips/publishing_your_files_to_the_public.mdwn
@@ -61,3 +61,16 @@ To enable use a private S3 bucket for the remotes and then pre-sign actual URL w
Example:
    key=`git annex lookupkey "$fname"`; sign_s3_url.bash --region 'eu-west-1' --bucket 'mybuck' --file-path "$key" --aws-access-key-id XX --aws-secret-access-key XX --method 'GET' --minute-expire 10
+
+## Adding the S3 URL as a source
+
+Assuming all files in the current directory are available on S3, this will register the public S3 url for the file in git-annex, making it available for everyone *through git-annex*:
+
+<pre>
+git annex find --in public-s3 | while read -r file ; do
+    key=$(git annex lookupkey "$file")
+    echo "$key" "https://public-annex.s3.amazonaws.com/$key"
+done | git annex registerurl
+</pre>
+
+`registerurl` was introduced in `5.20150317`. There's a todo open to ensure we don't have to do this by hand: [[todo/credentials-less access to s3]].
diff --git a/doc/tips/publishing_your_files_to_the_public/comment_5_29c3ee4aed6a5b53b6767a96a7b85ad9._comment b/doc/tips/publishing_your_files_to_the_public/comment_5_29c3ee4aed6a5b53b6767a96a7b85ad9._comment
new file mode 100644
index 000000000..bd77d03ce
--- /dev/null
+++ b/doc/tips/publishing_your_files_to_the_public/comment_5_29c3ee4aed6a5b53b6767a96a7b85ad9._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="https://id.koumbit.net/anarcat"
+ subject="comment 5"
+ date="2015-05-27T21:50:10Z"
+ content="""
+[registerurl](http://source.git-annex.branchable.com/?p=source.git;a=blobdiff;f=doc/git-annex.mdwn;h=c33633e03378b0125a3feb5d1a9fa61ce9bfa9cc;hp=3af9bbb8c1d2e84506f3db80ad7253a7cd8de1d4;hb=abfe3c09b2caac0827a2196076c9bd9185451b9f;hpb=b24bb6b435ddc91510163c7b22db2ba52703724c) may provide a faster version of the above. I've also created a [[feature request|todo/credentials-less_access_to_s3]] to make this easier with s3 (so that we don't have to set up urls for each individual file). --[[anarcat]]
+"""]]
diff --git a/doc/todo/credentials-less_access_to_s3.mdwn b/doc/todo/credentials-less_access_to_s3.mdwn
new file mode 100644
index 000000000..39835ac1f
--- /dev/null
+++ b/doc/todo/credentials-less_access_to_s3.mdwn
@@ -0,0 +1,11 @@
+My situation is this: while I know I can *read and write* to [[special_remotes/S3]] fairly easily with the credentials, I cannot read from there from other remotes that do not have those credentials enabled.
+
+This seems to be an assumption deeply rooted in git-annex, specifically in `Remote/S3.hs:390`.
+
+It would be *very* useful to allow remotes to read from S3 transparently. I am aware of the tips mentioned in the comments of [[tips/publishing_your_files_to_the_public/]] that use the `addurl` hack, but this seems not only counter-intuitive, it also seems to add significant per-file overhead in the storage. It also requires running an extra command after every `git annex add`, which is a problem if you are running the assistant, which will add stuff behind your back.
+
+Besides, you never know if and when the file really is available on s3, so running addurl isn't necessarily accurate.
+
+How hard would it be to fix that in the s3 remote?
+
+Thanks! --[[anarcat]]
diff --git a/doc/todo/find_unused_in_any_commit/comment_2_ab373440bf7bab9179fdfccf6da3e8a4._comment b/doc/todo/find_unused_in_any_commit/comment_2_ab373440bf7bab9179fdfccf6da3e8a4._comment
new file mode 100644
index 000000000..8d489927f
--- /dev/null
+++ b/doc/todo/find_unused_in_any_commit/comment_2_ab373440bf7bab9179fdfccf6da3e8a4._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="eigengrau"
+ subject="comment 2"
+ date="2015-05-28T13:44:32Z"
+ content="""
+This would be absolutely awesome, because it would allow pruning away old data based on cut-offs. One could squash all history beyond some cut-off point. Or, probably better, one could preserve git history but supply “git annex fsck” with a cut-off switch that specifies a date or time interval. All data referred to only in commits older than the specified interval would then be considered unused.
+"""]]