author  http://digiuser.livejournal.com/ <http://digiuser.livejournal.com/@web>  2014-12-26 06:28:26 +0000
committer  admin <admin@branchable.com>  2014-12-26 06:28:26 +0000
commit  c590e65b1da4a018937655c39a37c9eb05e3405e (patch)
tree    447c991e10732d9f5ad0c9036bf89ca98b5f296b
parent  646dff0fcbb4f4e58b0988d77083ec1bf50bd3e2 (diff)
-rw-r--r--  doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn  37
1 file changed, 37 insertions, 0 deletions
diff --git a/doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn b/doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn
new file mode 100644
index 000000000..b5b38ce3c
--- /dev/null
+++ b/doc/forum/is_there_a_way_to_automatically_retry_when_special_remotes_fail__63__.mdwn
@@ -0,0 +1,37 @@
+I'm storing hundreds of gigabytes of data on an S3 remote, and often when I try to copy to the remote with a command like this:
+
+ git annex copy newdir/* --to my-s3-remote
+
+I'll get a little way into uploading some large file (which is stored in chunks) and then see something like this:
+
+ copy newdir/file1.tgz (gpg) (checking my-s3-remote...) (to my-s3-remote...)
+
+ 3% 2.2MB/s 11h14m
+
+ ErrorMisc "<socket: 16>: Data.ByteString.hGetLine: timeout (Operation timed out)"
+
+ failed
+
+ copy newdir/file2.tgz (checking my-s3-remote...) (to my-s3-remote...)
+
+ 15% 2.3MB/s 3h40m
+
+ ErrorMisc "<socket: 16>: Data.ByteString.hGetLine: resource vanished (Connection reset by peer)"
+
+ failed
+
+ copy newdir/file3.tgz (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) (checking my-s3-remote...) ok
+
+One common cause is an intermittent Internet connection, but it can happen even when my connection seems steady. I'm willing to chalk that up to network problems elsewhere, though.
+
+If I just keep hitting "up, enter" to re-run the command each time it fails, eventually everything gets uploaded.
+
+But this can take weeks: with files this big I'll often let an upload run overnight, then wake up in the morning and find, with dismay, that it has failed again.
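+
+For now, the closest I can get to automation is wrapping the command in a shell loop; here's a rough sketch of what I have in mind, assuming `git annex copy` exits nonzero whenever any file fails to reach the remote:
+
+    # keep retrying until every file in newdir has been copied to the remote
+    while ! git annex copy newdir/* --to my-s3-remote; do
+        echo "copy failed; retrying in 60 seconds..." >&2
+        sleep 60
+    done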
+
+My questions:
+
+- Is there a way to make it automatically retry? I'm fairly sure that after any of these errors, an immediate automatic retry would almost always succeed.
+
+- If not, is there at least a way to make it pick up where it left off? Even though I'm using chunks, it seems to start the failed file over from the beginning.
+
+Thanks.