-rw-r--r--  Remote/Helper/Special.hs   8
-rw-r--r--  debian/changelog   2
-rw-r--r--  doc/forum/s3_special_remote_does_not_resume_uploads_even_with_new_chunking/comment_4_bd631d470ee0365a11483c9a2e563b32._comment   30
3 files changed, 37 insertions, 3 deletions
diff --git a/Remote/Helper/Special.hs b/Remote/Helper/Special.hs
index 483ef576e..956d48273 100644
--- a/Remote/Helper/Special.hs
+++ b/Remote/Helper/Special.hs
@@ -184,12 +184,14 @@ specialRemote' cfg c preparestorer prepareretriever prepareremover preparecheckp
 	-- chunk, then encrypt, then feed to the storer
 	storeKeyGen k f p enc = safely $ preparestorer k $ safely . go
 	  where
-		go (Just storer) = sendAnnex k rollback $ \src ->
+		go (Just storer) = preparecheckpresent k $ safely . go' storer
+		go Nothing = return False
+		go' storer (Just checker) = sendAnnex k rollback $ \src ->
 			displayprogress p k f $ \p' ->
 				storeChunks (uuid baser) chunkconfig k src p'
 					(storechunk enc storer)
-					(checkPresent baser)
-		go Nothing = return False
+					checker
+		go' _ Nothing = return False
 		rollback = void $ removeKey encr k
 
 	storechunk Nothing storer k content p = storer k content p
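
In rough terms, the change above threads the remote's own checkpresent action through to storeChunks, so each chunk can be tested for presence before it is uploaded, and chunks already stored by an earlier, interrupted run are skipped. A minimal standalone sketch of that idea (storeChunksResumable, ChunkKey, and the rest are illustrative names, not git-annex's actual API):

    -- Sketch only: skip chunks the remote already has, upload the rest.
    module ResumeSketch where

    import Control.Monad (unless)

    type ChunkKey = String

    storeChunksResumable
        :: (ChunkKey -> IO Bool)  -- checkpresent action for the remote
        -> (ChunkKey -> IO ())    -- storer action that uploads one chunk
        -> [ChunkKey]             -- keys of the chunks to store
        -> IO ()
    storeChunksResumable checker storer = mapM_ $ \ck -> do
        present <- checker ck
        unless present (storer ck)

    main :: IO ()
    main = storeChunksResumable
        (\ck -> return (ck == "chunk-1"))       -- pretend chunk-1 was stored on an earlier run
        (\ck -> putStrLn ("uploading " ++ ck))
        ["chunk-1", "chunk-2", "chunk-3"]

Running this uploads only chunk-2 and chunk-3, which is the resume behaviour the fix restores.
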
diff --git a/debian/changelog b/debian/changelog
index d8e468dc8..e494f9ced 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -3,6 +3,8 @@ git-annex (5.20150714) UNRELEASED; urgency=medium
   * Improve bash completion code so that "git annex" will also tab
     complete. However, git's bash completion script needs a patch,
     which I've submitted, for this to work perfectly.
+  * Fix bug that prevented uploads to remotes using new-style chunking
+    from resuming after the last successfully uploaded chunk.
 
  -- Joey Hess <id@joeyh.name>  Thu, 16 Jul 2015 14:55:07 -0400
diff --git a/doc/forum/s3_special_remote_does_not_resume_uploads_even_with_new_chunking/comment_4_bd631d470ee0365a11483c9a2e563b32._comment b/doc/forum/s3_special_remote_does_not_resume_uploads_even_with_new_chunking/comment_4_bd631d470ee0365a11483c9a2e563b32._comment
new file mode 100644
index 000000000..ecac7917c
--- /dev/null
+++ b/doc/forum/s3_special_remote_does_not_resume_uploads_even_with_new_chunking/comment_4_bd631d470ee0365a11483c9a2e563b32._comment
@@ -0,0 +1,30 @@
+[[!comment format=mdwn
+ username="joey"
+ subject="""comment 4"""
+ date="2015-07-16T17:57:44Z"
+ content="""
+This should have been filed as a bug report... I will move the thread to
+bugs after posting this comment.
+
+In your obfuscated log, it tries to HEAD GPGHMACSHA1--1111111111
+and when that fails, it PUTs GPGHMACSHA1--2222222222. From this, we can
+deduce that GPGHMACSHA1--1111111111 is not the first chunk, but is the full
+non-chunked file, and GPGHMACSHA1--2222222222 is actually the first chunk.
+
+For testing, I modified the S3 remote to make file uploads succeed, but then
+report to git-annex that they failed. So, git annex copy uploads the 1st
+chunk and then fails, as if it had been interrupted at that point. Repeating
+the copy, I see the same thing: it HEADs the full key but does not HEAD the
+first chunk, so it doesn't notice that chunk was uploaded before, and
+re-uploads it.
+
+The HEAD of the full key is just done for backwards compatibility reasons.
+The problem is that it's not checking whether the chunk it's about to
+upload is already present in the remote. But there is code in seekResume
+that is supposed to do that very check: `tryNonAsync (checker k)`
+
+Aha, the problem seems to be in the checkpresent action that's passed to
+it. Looks like a dummy checkpresent action was being passed in.
+
+I've fixed this in git, and now it resumes properly in my test case.
+"""]]