author    Joey Hess <joeyh@joeyh.name>    2016-04-19 13:46:11 -0400
committer Joey Hess <joeyh@joeyh.name>    2016-04-19 13:55:29 -0400
commit    dbfc00e2a6ad99200da35f75f889174cd7bfd195 (patch)
tree      9d8a9d1353ab10376183c1bda881fface04b6fcb /doc/bugs/S3_upload_not_using_multipart
parent    41b7950285ef1e91b80c441c2be68a1ef4d0d27c (diff)
remove old closed bugs and todo items to speed up wiki updates and reduce size
Remove closed bugs and todos that were last edited or commented before Q3 2015.

Command line used:

    for f in $(grep -l '\[\[done\]\]' -- *.mdwn); do d="$(echo "$f" | sed 's/.mdwn$//')"; if [ -z "$(git log --since=09-09-2015 --pretty=oneline -- "$f")" -a -z "$(git log --since=09-09-2015 --pretty=oneline -- "$d")" ]; then git rm -- "$f"; git rm -rf "$d"; fi; done
    for f in $(grep -l '|done\]\]' -- *.mdwn); do d="$(echo "$f" | sed 's/.mdwn$//')"; if [ -z "$(git log --since=09-09-2015 --pretty=oneline -- "$f")" -a -z "$(git log --since=09-09-2015 --pretty=oneline -- "$d")" ]; then git rm -- "$f"; git rm -rf "$d"; fi; done
Diffstat (limited to 'doc/bugs/S3_upload_not_using_multipart')
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_11_ba1f866645419476bbedd6b1e4bbd33f._comment    8
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_12_bf98d0c771dfdd15ddafdba2d94d911f._comment    8
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_1_5bed9faafc43b535f7820749510aaa14._comment    10
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_2_d82952cf324e769e45f4d90f200210f4._comment    17
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_3_d878b87a05f4fcd380e6ff309b615aab._comment    14
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_4_09a3372fd13734cbb05e79d0ba76d052._comment    8
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_5_5add65b5b284f79ec09ee4d0326e7132._comment    8
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_6_906abafc53070d8e4f33df486d2241ea._comment   12
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_7_f620888512cd78628f82ec9e5eed4ad1._comment   21
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_8_4d9242cde0d2348452438659a8aa8d6d._comment    8
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_9_1f5578a9100f0f087a558e5e5968d753._comment    8
-rw-r--r--  doc/bugs/S3_upload_not_using_multipart/comment_9_74b2a392a537dde1c28089f1deed940c._comment   31
12 files changed, 0 insertions, 153 deletions
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_11_ba1f866645419476bbedd6b1e4bbd33f._comment b/doc/bugs/S3_upload_not_using_multipart/comment_11_ba1f866645419476bbedd6b1e4bbd33f._comment
deleted file mode 100644
index c2fdae5e9..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_11_ba1f866645419476bbedd6b1e4bbd33f._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="joey"
- subject="""comment 11"""
- date="2014-11-03T21:27:22Z"
- content="""
-Now implemented on the s3-aws branch. Needs a version of the Haskell
-aws library that is not quite released yet (but is available in git).
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_12_bf98d0c771dfdd15ddafdba2d94d911f._comment b/doc/bugs/S3_upload_not_using_multipart/comment_12_bf98d0c771dfdd15ddafdba2d94d911f._comment
deleted file mode 100644
index 2be62f668..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_12_bf98d0c771dfdd15ddafdba2d94d911f._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="https://www.google.com/accounts/o8/id?id=AItOawnWvnTWY6LrcPB4BzYEBn5mRTpNhg5EtEg"
- nickname="Bence"
- subject="comment 12"
- date="2014-12-08T17:28:52Z"
- content="""
-Linked this bug to [[special remotes/S3|/special_remotes/S3]].
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_1_5bed9faafc43b535f7820749510aaa14._comment b/doc/bugs/S3_upload_not_using_multipart/comment_1_5bed9faafc43b535f7820749510aaa14._comment
deleted file mode 100644
index 117055e3f..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_1_5bed9faafc43b535f7820749510aaa14._comment
+++ /dev/null
@@ -1,10 +0,0 @@
-[[!comment format=mdwn
- username="https://www.google.com/accounts/o8/id?id=AItOawl9sYlePmv1xK-VvjBdN-5doOa_Xw-jH4U"
- nickname="Richard"
- subject="comment 1"
- date="2014-05-13T15:09:42Z"
- content="""
-JFTR, this is impacting DebConf's video storage as well.
-
-Richard
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_2_d82952cf324e769e45f4d90f200210f4._comment b/doc/bugs/S3_upload_not_using_multipart/comment_2_d82952cf324e769e45f4d90f200210f4._comment
deleted file mode 100644
index 7ee3c1167..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_2_d82952cf324e769e45f4d90f200210f4._comment
+++ /dev/null
@@ -1,17 +0,0 @@
-[[!comment format=mdwn
- username="annexuser"
- ip="64.71.7.82"
- subject="comment 2"
- date="2014-07-14T18:21:00Z"
- content="""
-I'm having the same problem. Is there a fix for this yet?
-
-    $ git annex version
-    git-annex version: 5.20140709-gc75193e
-    build flags: Assistant Webapp Webapp-secure Pairing Testsuite S3 WebDAV Inotify DBus DesktopNotify XMPP DNS Feeds Quvi TDFA CryptoHash
-    key/value backends: SHA256E SHA1E SHA512E SHA224E SHA384E SKEIN256E SKEIN512E SHA256 SHA1 SHA512 SHA224 SHA384 SKEIN256 SKEIN512 WORM URL
-    remote types: git gcrypt S3 bup directory rsync web webdav tahoe glacier ddar hook external
-    local repository version: 5
-    supported repository version: 5
-    upgrade supported from repository versions: 0 1 2 4
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_3_d878b87a05f4fcd380e6ff309b615aab._comment b/doc/bugs/S3_upload_not_using_multipart/comment_3_d878b87a05f4fcd380e6ff309b615aab._comment
deleted file mode 100644
index 46245e657..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_3_d878b87a05f4fcd380e6ff309b615aab._comment
+++ /dev/null
@@ -1,14 +0,0 @@
-[[!comment format=mdwn
- username="http://joeyh.name/"
- ip="209.250.56.112"
- subject="comment 3"
- date="2014-08-02T23:13:41Z"
- content="""
-There is now a workaround; S3 special remotes can be configured to use [[chunking]].
-
-For example, to reconfigure an existing mys3 remote: `enableremote mys3 chunk=1MiB`
-
-I'm leaving this bug open because chunking is not the default (although the assistant does enable it by default), and because this chunking operates at a higher, and less efficient, level than S3's own multipart upload API. In particular, AWS charges a fee for each HTTP request made for a chunk.
-
-Adding proper multipart support will probably require switching to a different S3 library.
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_4_09a3372fd13734cbb05e79d0ba76d052._comment b/doc/bugs/S3_upload_not_using_multipart/comment_4_09a3372fd13734cbb05e79d0ba76d052._comment
deleted file mode 100644
index d36628163..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_4_09a3372fd13734cbb05e79d0ba76d052._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="http://joeyh.name/"
- ip="209.250.56.112"
- subject="comment 4"
- date="2014-08-03T18:22:58Z"
- content="""
-The aws library does not support multipart yet either; here's the bug report requesting it: <https://github.com/aristidb/aws/issues/94>
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_5_5add65b5b284f79ec09ee4d0326e7132._comment b/doc/bugs/S3_upload_not_using_multipart/comment_5_5add65b5b284f79ec09ee4d0326e7132._comment
deleted file mode 100644
index 0c7742364..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_5_5add65b5b284f79ec09ee4d0326e7132._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="http://joeyh.name/"
- ip="209.250.56.112"
- subject="comment 5"
- date="2014-08-03T18:27:32Z"
- content="""
-However, I don't think that multipart upload actually allows exceeding the S3 limit of 5 GB per object. Configuring the remote with `chunk=100MiB` *does* allow bypassing whatever S3's maximum object size happens to be.
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_6_906abafc53070d8e4f33df486d2241ea._comment b/doc/bugs/S3_upload_not_using_multipart/comment_6_906abafc53070d8e4f33df486d2241ea._comment
deleted file mode 100644
index ad9d4b601..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_6_906abafc53070d8e4f33df486d2241ea._comment
+++ /dev/null
@@ -1,12 +0,0 @@
-[[!comment format=mdwn
- username="http://svario.it/gioele"
- nickname="gioele"
- subject="Multipart S3 support files > 5 GB"
- date="2014-08-04T06:00:45Z"
- content="""
-The [multipart guide](http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html) says that the limit is 5 TB per file.
-
-> **Upload objects in parts—Using the Multipart upload API you can upload large objects, up to 5 TB.**
-
-> The Multipart Upload API is designed to improve the upload experience for larger objects. You can upload objects in parts. These object parts can be uploaded independently, in any order, and in parallel. You can use a Multipart Upload for objects from 5 MB to 5 TB in size. For more information, see Uploading Objects Using Multipart Upload. For more information, see Uploading Objects Using Multipart Upload API.
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_7_f620888512cd78628f82ec9e5eed4ad1._comment b/doc/bugs/S3_upload_not_using_multipart/comment_7_f620888512cd78628f82ec9e5eed4ad1._comment
deleted file mode 100644
index ec47aa2be..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_7_f620888512cd78628f82ec9e5eed4ad1._comment
+++ /dev/null
@@ -1,21 +0,0 @@
-[[!comment format=mdwn
- username="https://www.google.com/accounts/o8/id?id=AItOawl9sYlePmv1xK-VvjBdN-5doOa_Xw-jH4U"
- nickname="Richard"
- subject="comment 7"
- date="2014-09-29T08:07:55Z"
- content="""
-As I found the latest comment confusing, here's the full quote:
-
-    Depending on the size of the data you are uploading, Amazon S3 offers the following options:
-
-    Upload objects in a single operation—With a single PUT operation you can upload objects up to 5 GB in size.
-
-    Upload objects in parts—Using the Multipart upload API you can upload large objects, up to 5 TB.
-
-    The Multipart Upload API is designed to improve the upload experience for larger objects. You can upload objects in parts.
-    These object parts can be uploaded independently, in any order, and in parallel.
-    You can use a Multipart Upload for objects from 5 MB to 5 TB in size.
-
-    We encourage Amazon S3 customers to use Multipart Upload for objects greater than 100 MB.
-
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_8_4d9242cde0d2348452438659a8aa8d6d._comment b/doc/bugs/S3_upload_not_using_multipart/comment_8_4d9242cde0d2348452438659a8aa8d6d._comment
deleted file mode 100644
index a427c504e..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_8_4d9242cde0d2348452438659a8aa8d6d._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="https://www.google.com/accounts/o8/id?id=AItOawl9sYlePmv1xK-VvjBdN-5doOa_Xw-jH4U"
- nickname="Richard"
- subject="comment 8"
- date="2014-09-29T08:09:33Z"
- content="""
-PS: Chunking spams the S3 remote with individual objects whereas multipart uploads do not. Just something to keep in mind in case you turn on chunking for S3.
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_9_1f5578a9100f0f087a558e5e5968d753._comment b/doc/bugs/S3_upload_not_using_multipart/comment_9_1f5578a9100f0f087a558e5e5968d753._comment
deleted file mode 100644
index 10cab3da9..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_9_1f5578a9100f0f087a558e5e5968d753._comment
+++ /dev/null
@@ -1,8 +0,0 @@
-[[!comment format=mdwn
- username="joey"
- subject="""comment 9"""
- date="2014-10-28T18:25:04Z"
- content="""
-I have a WIP branch `aws-s3-multipart`. I stopped when I got blocked
-by a bad API in the aws library: <https://github.com/aristidb/aws/issues/141>
-"""]]
diff --git a/doc/bugs/S3_upload_not_using_multipart/comment_9_74b2a392a537dde1c28089f1deed940c._comment b/doc/bugs/S3_upload_not_using_multipart/comment_9_74b2a392a537dde1c28089f1deed940c._comment
deleted file mode 100644
index b965ff0ab..000000000
--- a/doc/bugs/S3_upload_not_using_multipart/comment_9_74b2a392a537dde1c28089f1deed940c._comment
+++ /dev/null
@@ -1,31 +0,0 @@
-[[!comment format=mdwn
- username="joey"
- subject="""comment 9"""
- date="2014-10-28T16:42:21Z"
- content="""
-The aws library now supports multipart uploads, using its
-S3.Commands.Multipart module.
-
-I don't think that multipart and chunking fit together: typically the
-chunks are too small for multipart to be worthwhile on any individual
-chunk. And the chunks shouldn't be combined into a complete object at the
-end (at least not if we care about using chunking to obscure object size).
-Individual chunk sizes can vary when encryption is used, so combining them
-all into one file wouldn't work.
-
-Also, multipart uploads require at least 3 HTTP calls, so there's no point
-using multipart for small objects; it would only add overhead.
-
-So, multipart uploads should be used when not chunking and the object to
-upload exceeds some size threshold, which should probably default to
-something in the range of 100 MB to 1 GB.
-
-It might be possible to support resuming interrupted multipart uploads.
-It seems that git-annex would need to store, locally, the UploadId,
-as well as the list of uploaded parts, including the ETag of each part
-(the part ETags are also needed when completing the multipart upload).
-
-Also, it should probably set Expires when initiating the multipart upload,
-so that incomplete ones get cleaned up after some period of time.
-Otherwise, users would probably be billed for them.
-"""]]