From 2ba5af49c94b97c586220c3553367988ef095934 Mon Sep 17 00:00:00 2001
From: Joey Hess
Date: Tue, 4 Nov 2014 16:06:13 -0400
Subject: work around minimum part size problem

When uploading the last part of a file, which was 640229 bytes, S3 rejected
that part: "Your proposed upload is smaller than the minimum allowed size"

I don't know what the minimum is, but the fix is just to include the last
part into the previous part. Since this can result in a part that's
double-sized, use half-sized parts normally.
---
 doc/special_remotes/S3.mdwn | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/doc/special_remotes/S3.mdwn b/doc/special_remotes/S3.mdwn
index c7c6f76c5..59c1abed7 100644
--- a/doc/special_remotes/S3.mdwn
+++ b/doc/special_remotes/S3.mdwn
@@ -21,10 +21,14 @@ the S3 remote.
 * `chunk` - Enables [[chunking]] when storing large files.
   `chunk=1MiB` is a good starting point for chunking.
 
-* `partsize` - Specifies the largest object to attempt to store in the
-  bucket. Multipart uploads will be used when storing larger objects.
-  This is not enabled by default, but can be enabled or changed at any
-  time. Setting `partsize=1GiB` is reasonable for S3.
+* `partsize` - Amazon S3 only accepts uploads up to a certain file size,
+  and storing larger files requires a multipart upload process.
+  Setting `partsize=1GiB` is recommended for Amazon S3; this will
+  cause multipart uploads to be done using parts up to 1GiB in size.
+
+  This is not enabled by default, since other S3 implementations may
+  not support multipart uploads, but can be enabled or changed at any
+  time.
 
 * `keyid` - Specifies the gpg key to use for [[encryption]].
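
For reference, Amazon's documented minimum multipart part size is 5 MiB for
every part except the last. Below is a minimal Haskell sketch of the strategy
the commit message describes, not git-annex's actual code: normal parts are
half of the configured partsize, and a trailing remainder smaller than a
half-part is merged into the part before it, so a merged final part can
approach, but never exceed, the full partsize. The `partSizes` helper is
hypothetical, introduced here only for illustration.

    module Main where

    -- Sketch, not git-annex's actual implementation: compute the sizes of
    -- the multipart upload parts for a file of the given total size.
    -- Normal parts are half the configured partsize; a trailing remainder
    -- smaller than a half-part is folded into the previous part, so the
    -- final part stays between half a partsize and a full partsize.
    partSizes :: Integer -> Integer -> [Integer]
    partSizes partsize total = go total
      where
        half = max 1 (partsize `div` 2)
        go remaining
          | remaining <= 0 = []
          | remaining < 2 * half = [remaining] -- merge remainder into final part
          | otherwise = half : go (remaining - half)

    main :: IO ()
    main = do
        let gib = 1024 ^ 3 :: Integer
        -- With partsize=1GiB, normal parts are 512MiB and the final part
        -- absorbs the 640229-byte remainder instead of being uploaded as
        -- an undersized part that S3 would reject.
        print (partSizes gib (2 * gib + 640229))

Running this prints [536870912,536870912,536870912,537511141]: three 512MiB
parts plus a final part that has absorbed the small remainder, rather than a
trailing 640229-byte part that would fall below the minimum.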