author	Joey Hess <joey@kitenet.net>	2014-11-04 16:22:29 -0400
committer	Joey Hess <joey@kitenet.net>	2014-11-04 16:38:46 -0400
commit	9756a35112672629cd31a00390c8326478b2b7e8 (patch)
tree	597e65a62f22d10dd37924c305fb09eea9f3b107	/doc/special_remotes/S3.mdwn
parent	c3a7e577df99ba3ba2ee4ffb480bf49b0eaa7739 (diff)
better partsize docs
The minimum allowed size actually refers to the part size!
Diffstat (limited to 'doc/special_remotes/S3.mdwn')
-rw-r--r--	doc/special_remotes/S3.mdwn	16
1 file changed, 12 insertions(+), 4 deletions(-)
diff --git a/doc/special_remotes/S3.mdwn b/doc/special_remotes/S3.mdwn
index c7c6f76c5..aac66abeb 100644
--- a/doc/special_remotes/S3.mdwn
+++ b/doc/special_remotes/S3.mdwn
@@ -21,10 +21,18 @@ the S3 remote.
* `chunk` - Enables [[chunking]] when storing large files.
`chunk=1MiB` is a good starting point for chunking.
-* `partsize` - Specifies the largest object to attempt to store in the
- bucket. Multipart uploads will be used when storing larger objects.
- This is not enabled by default, but can be enabled or changed at any
- time. Setting `partsize=1GiB` is reasonable for S3.
+* `partsize` - Amazon S3 only accepts uploads up to a certain file size,
+ and storing larger files requires a multipart upload process.
+
+ Setting `partsize=1GiB` is recommended for Amazon S3; this will
+ cause multipart uploads to be done using parts up to 1GiB in size.
+ Note that setting partsize to less than 100MiB will cause Amazon S3 to
+ reject uploads.
+
+ This is not enabled by default, since other S3 implementations may
+ not support multipart uploads or have different limits,
+ but can be enabled or changed at any
+ time.
* `keyid` - Specifies the gpg key to use for [[encryption]].
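For reference, a rough usage sketch (not part of this patch) of how the options documented above are passed when creating or later reconfiguring an S3 special remote; the remote name `cloud` and the gpg keyid are placeholder values:

    # Create an S3 special remote with gpg encryption and multipart
    # uploads enabled ("cloud" and the keyid are placeholders).
    git annex initremote cloud type=S3 encryption=hybrid keyid=2512E3C7 partsize=1GiB

    # partsize is not enabled by default, but it can be enabled or
    # changed at any time on an existing remote:
    git annex enableremote cloud partsize=1GiB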