author    Joey Hess <joeyh@joeyh.name>  2014-12-28 14:32:29 -0400
committer Joey Hess <joeyh@joeyh.name>  2014-12-28 14:32:29 -0400
commit    d0b9a83e8556b995d2035b2687fdd3507f8d7b86 (patch)
tree      faf4c31b101c009d806449737e352852c22b7ceb
parent    4c4e7852d8affeb4f57a16c5b21c0e35bb568a94 (diff)
parent    320b552720444b22b707931ea4b755c7f3289bbf (diff)
Merge branch 'master' of ssh://git-annex.branchable.com
-rw-r--r--  doc/design/assistant/blog/day_12__freebsd_redux/comment_3_5ab7808595e3b51ca4141d15fdd44743._comment | 7
-rw-r--r--  doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn | 37
-rw-r--r--  doc/forum/remote_server_client_repositories_are_bare__33____63__/comment_4_cd04cfaf97f200d5e581b83bb8d018b2._comment | 26
-rw-r--r--  doc/tips/publishing_your_files_to_the_public.mdwn | 2
-rw-r--r--  doc/todo/Slow_transfer_for_a_lot_of_small_files./comment_2_80d1080bf6e82bd8aaccde9d7c1669c7._comment | 8
5 files changed, 79 insertions(+), 1 deletion(-)
diff --git a/doc/design/assistant/blog/day_12__freebsd_redux/comment_3_5ab7808595e3b51ca4141d15fdd44743._comment b/doc/design/assistant/blog/day_12__freebsd_redux/comment_3_5ab7808595e3b51ca4141d15fdd44743._comment
new file mode 100644
index 000000000..6bcba4d3e
--- /dev/null
+++ b/doc/design/assistant/blog/day_12__freebsd_redux/comment_3_5ab7808595e3b51ca4141d15fdd44743._comment
@@ -0,0 +1,7 @@
+[[!comment format=mdwn
+ username="chris"
+ subject="Thanks"
+ date="2014-12-25T11:58:55Z"
+ content="""
+Thanks
+"""]]
diff --git a/doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn b/doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn
new file mode 100644
index 000000000..b5b38ce3c
--- /dev/null
+++ b/doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn
@@ -0,0 +1,37 @@
+I'm storing hundreds of gigabytes of data on an S3 remote, and often when I try to copy to my remote using this type of command:
+
+ git annex copy newdir/* --to my-s3-remote
+
+I'll get a little way into uploading some large file (which is in chunks) and then get something like this:
+
+ copy newdir/file1.tgz (gpg) (checking my-s3-remote...) (to my-s3-remote...)
+
+ 3% 2.2MB/s 11h14m
+
+ ErrorMisc "<socket: 16>: Data.ByteString.hGetLine: timeout (Operation timed out)"
+
+ failed
+
+ copy newdir/file2.tgz (checking my-s3-remote...) (to my-s3-remote...)
+
+ 15% 2.3MB/s 3h40m
+
+ ErrorMisc "<socket: 16>: Data.ByteString.hGetLine: resource vanished (Connection reset by peer)"
+
+ failed
+
+ copy newdir/file3.tgz (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) ok
+
+One common cause of this is if my Internet connection is intermittent. But even when my connection seems steady, it can happen. I'm willing to chalk that up to network problems elsewhere though.
+
+If I just keep hitting "up, enter" to re-execute the command each time it fails, eventually everything gets up there.
+
+But this can actually take weeks, because often, when uploading these big files, I'll let it run overnight, and then wake up every morning and find out with dismay that it has failed again.
+
+My questions:
+
+- Is there a way to make it automatically retry? I am sure that upon any of these errors, an immediate automatic retry would almost assuredly work.
+
+- If not, is there at least a way to make it pick up where it left off? Even though I'm using chunks, it seems to start the file over again.
+
+Thanks.
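
A simple interim workaround, sketched here only roughly (it reuses the `newdir` path and `my-s3-remote` remote named in the post above), is to wrap the copy in a shell loop that re-runs it until git-annex exits successfully:

    # Re-run the copy until every file makes it to the remote; files already
    # present on the remote are skipped, so each pass only retries what failed.
    until git annex copy newdir/* --to my-s3-remote; do
        echo "copy failed, retrying in 60 seconds..." >&2
        sleep 60
    done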
diff --git a/doc/forum/remote_server_client_repositories_are_bare__33____63__/comment_4_cd04cfaf97f200d5e581b83bb8d018b2._comment b/doc/forum/remote_server_client_repositories_are_bare__33____63__/comment_4_cd04cfaf97f200d5e581b83bb8d018b2._comment
new file mode 100644
index 000000000..5213a7f95
--- /dev/null
+++ b/doc/forum/remote_server_client_repositories_are_bare__33____63__/comment_4_cd04cfaf97f200d5e581b83bb8d018b2._comment
@@ -0,0 +1,26 @@
+[[!comment format=mdwn
+ username="https://www.google.com/accounts/o8/id?id=AItOawkRW96vF6lsjg57muQ4nPnQqJJUAKGKGzw"
+ nickname="Catalin"
+ subject="Caveat with 'git checkout synced/master'"
+ date="2014-12-26T07:43:18Z"
+ content="""
+There's at least one caveat with the 'git checkout synced/master' workaround. When the local assistant next tries to sync with the remote, it will try to push, and the remote will refuse the push into the currently checked out branch:
+
+[2014-12-26 02:40:10 EST] main: Syncing with nyc.nanobit.org_annex
+remote: error: refusing to update checked out branch: refs/heads/synced/master
+remote: error: By default, updating the current branch in a non-bare repository
+remote: error: is denied, because it will make the index and work tree inconsistent
+remote: error: with what you pushed, and will require 'git reset --hard' to match
+remote: error: the work tree to HEAD.
+remote: error:
+remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
+remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
+remote: error: its current branch; however, this is not recommended unless you
+remote: error: arranged to update its work tree to match what you pushed in some
+remote: error: other way.
+remote: error:
+remote: error: To squelch this message and still keep the default behaviour, set
+remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
+To ssh://catalinp@git-annex-nyc.nanobit.org-catalinp_22_annex/~/annex/
+ ! [remote rejected] annex/direct/master -> synced/master (branch is currently checked out)
+"""]]
diff --git a/doc/tips/publishing_your_files_to_the_public.mdwn b/doc/tips/publishing_your_files_to_the_public.mdwn
index fc054370f..3845ae3e9 100644
--- a/doc/tips/publishing_your_files_to_the_public.mdwn
+++ b/doc/tips/publishing_your_files_to_the_public.mdwn
@@ -46,7 +46,7 @@ To share all the links in a given folder, for example, you can go to that folder
The same applies to all the filters you can do with git-annex.
For example, let's share links to all the files whose _author_'s name starts with "Mario" and are, in fact, stored at your public-s3 remote.
-However, instead of just a list of links we will output a markdown-formatted list of the filenames linked to their S3 files:
+However, instead of just a list of links we will output a markdown-formatted list of the filenames linked to their S3 urls:
for filename in (git annex find --metadata "author=Mario*" --and --in public-s3)
echo "* ["$filename"](https://public-annex.s3.amazonaws.com/"(git annex lookupkey $filename)")"
diff --git a/doc/todo/Slow_transfer_for_a_lot_of_small_files./comment_2_80d1080bf6e82bd8aaccde9d7c1669c7._comment b/doc/todo/Slow_transfer_for_a_lot_of_small_files./comment_2_80d1080bf6e82bd8aaccde9d7c1669c7._comment
new file mode 100644
index 000000000..d531914ab
--- /dev/null
+++ b/doc/todo/Slow_transfer_for_a_lot_of_small_files./comment_2_80d1080bf6e82bd8aaccde9d7c1669c7._comment
@@ -0,0 +1,8 @@
+[[!comment format=mdwn
+ username="https://launchpad.net/~krastanov-stefan"
+ nickname="krastanov-stefan"
+ subject="Status of this issue"
+ date="2014-12-27T15:18:42Z"
+ content="""
+I was unable to find a way to tell git-annex that certain remotes should receive multiple transfers in parallel. Is this implemented yet, or on the roadmap? If neither, would modifying the webapp to carry this logic, without touching git-annex itself, be a solution (asking mainly because that could be done with a Greasemonkey script)?
+"""]]