author     Joey Hess <id@joeyh.name>  2014-12-03 14:02:29 -0400
committer  Joey Hess <id@joeyh.name>  2014-12-03 14:10:52 -0400
commit  69957946eaa066406a243edca8fd3e19e7febfee (patch)
tree    7ce300577cd986f4f03b5f81446a188916e75097 /doc/todo
parent  ab9bb79e8f0eaa8d951d46e82b321f8511ded942 (diff)
parent  718932c895b38228ab8aed4477d7ce8bba205e5a (diff)
Merge branch 's3-aws'
Diffstat (limited to 'doc/todo')
-rw-r--r--  doc/todo/S3_multipart_interruption_cleanup.mdwn | 14
1 file changed, 14 insertions, 0 deletions
diff --git a/doc/todo/S3_multipart_interruption_cleanup.mdwn b/doc/todo/S3_multipart_interruption_cleanup.mdwn
new file mode 100644
index 000000000..adb5fd2cb
--- /dev/null
+++ b/doc/todo/S3_multipart_interruption_cleanup.mdwn
@@ -0,0 +1,14 @@
+When a multipart S3 upload is interrupted, the parts that were already
+sent remain in the bucket, and S3 may charge for them.
+
+I am not sure what happens if the same object gets uploaded again. Is S3
+nice enough to remove the old parts? I need to find out...
+
+If not, this needs to be dealt with somehow. One way would be to configure an
+expiry of the uploaded parts, but this is tricky as a huge upload could
+take arbitrarily long. Another way would be to record the uploadid and the
+etags of the parts, and then resume where it left off the next time the
+object is sent to S3. (Or at least cancel the old upload; resume isn't
+practical when uploading an encrypted object.)
+
+It could store that info in either the local FS or the git-annex branch.
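
For reference, S3 does not clean the old parts up on its own: each
multipart upload is tracked under its own UploadId, and uploading the
same key again starts a fresh upload rather than replacing the
interrupted one, so orphaned parts persist (and are billed) until they
are explicitly aborted. Here is a minimal sketch of the "cancel the old
upload" option, in Python with boto3 purely for illustration (git-annex
itself talks to S3 from Haskell via the aws library); the bucket and
key names are hypothetical.

```python
# Illustrative sketch only: find and abort leftover multipart uploads
# for a key, via the ListMultipartUploads and AbortMultipartUpload S3
# operations (shown with boto3; git-annex itself uses the Haskell aws
# library).
import boto3

s3 = boto3.client("s3")

def abort_stale_uploads(bucket, key):
    """Abort every in-progress multipart upload recorded for key."""
    resp = s3.list_multipart_uploads(Bucket=bucket, Prefix=key)
    for upload in resp.get("Uploads", []):
        if upload["Key"] == key:
            s3.abort_multipart_upload(
                Bucket=bucket, Key=key, UploadId=upload["UploadId"]
            )

# Hypothetical bucket and key names:
abort_stale_uploads("my-annex-bucket", "SHA256E-s12345--example.dat")
```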
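And a sketch of the resume option, under the same caveats: if the
UploadId and the part size were recorded, the parts S3 already holds,
together with their etags, can simply be read back with ListParts, so
only the missing parts need to be re-sent. As the note says, this only
helps when the exact same byte stream can be produced again, which
rules out the encrypted case; there, aborting is the fallback.

```python
def resume_upload(bucket, key, upload_id, part_size, path):
    """Resume a multipart upload, skipping parts S3 already has."""
    # Recover part number -> ETag for the parts already uploaded.
    have = {}
    for page in s3.get_paginator("list_parts").paginate(
        Bucket=bucket, Key=key, UploadId=upload_id
    ):
        for part in page.get("Parts", []):
            have[part["PartNumber"]] = part["ETag"]

    parts, num = [], 1
    with open(path, "rb") as f:
        while True:
            chunk = f.read(part_size)  # must match the original part size
            if not chunk:
                break
            if num in have:
                etag = have[num]  # already on S3; reuse its ETag
            else:
                etag = s3.upload_part(
                    Bucket=bucket, Key=key, UploadId=upload_id,
                    PartNumber=num, Body=chunk,
                )["ETag"]
            parts.append({"PartNumber": num, "ETag": etag})
            num += 1

    s3.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
```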