path: root/doc/special_remotes/S3.mdwn
author    Joey Hess <joey@kitenet.net>  2014-11-04 16:21:55 -0400
committer Joey Hess <joey@kitenet.net>  2014-11-04 16:21:55 -0400
commit c3a7e577df99ba3ba2ee4ffb480bf49b0eaa7739 (patch)
tree   c04170ca15eb3c89070c714df63795fedab1b1d9 /doc/special_remotes/S3.mdwn
parent 2ba5af49c94b97c586220c3553367988ef095934 (diff)
Revert "work around minimum part size problem"
This reverts commit 2ba5af49c94b97c586220c3553367988ef095934. I misunderstood the cause of the problem.
Diffstat (limited to 'doc/special_remotes/S3.mdwn')
-rw-r--r--  doc/special_remotes/S3.mdwn | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/doc/special_remotes/S3.mdwn b/doc/special_remotes/S3.mdwn
index 59c1abed7..c7c6f76c5 100644
--- a/doc/special_remotes/S3.mdwn
+++ b/doc/special_remotes/S3.mdwn
@@ -21,14 +21,10 @@ the S3 remote.
* `chunk` - Enables [[chunking]] when storing large files.
`chunk=1MiB` is a good starting point for chunking.
-* `partsize` - Amazon S3 only accepts uploads up to a certain file size,
- and storing larger files requires a multipart upload process.
- Setting `partsize=1GiB` is recommended for Amazon S3; this will
- cause multipart uploads to be done using parts up to 1GiB in size.
-
- This is not enabled by default, since other S3 implementations may
- not support multipart uploads, but can be enabled or changed at any
- time.
+* `partsize` - Specifies the largest object to attempt to store in the
+ bucket. Multipart uploads will be used when storing larger objects.
+ This is not enabled by default, but can be enabled or changed at any
+ time. Setting `partsize=1GiB` is reasonable for S3.
* `keyid` - Specifies the gpg key to use for [[encryption]].
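For context, the options discussed in the hunk above are passed to `git annex initremote` when creating the S3 special remote. A minimal sketch; the remote name `mys3` and the `keyid` value are placeholders, not from this commit:

```shell
# Create an S3 special remote with chunking enabled.
# "mys3" is an example remote name; 2512E3C7 is a placeholder gpg key id.
git annex initremote mys3 type=S3 encryption=hybrid keyid=2512E3C7 chunk=1MiB

# As the new text notes, partsize is not enabled by default and can be
# enabled or changed at any time, here via enableremote:
git annex enableremote mys3 partsize=1GiB
```

With `partsize=1GiB` set, objects larger than 1GiB are stored using S3 multipart uploads with parts up to that size.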