-rw-r--r--  Remote/S3.hs                 15
-rw-r--r--  doc/special_remotes/S3.mdwn  12
2 files changed, 8 insertions, 19 deletions
diff --git a/Remote/S3.hs b/Remote/S3.hs
index 8d30c7c9b..e0ff93bb3 100644
--- a/Remote/S3.hs
+++ b/Remote/S3.hs
@@ -181,16 +181,9 @@ store r h = fileStorer $ \k f p -> do
}
uploadid <- S3.imurUploadId <$> sendS3Handle h startreq
- {- The actual part size will be an even multiple of the
- - 32k chunk size that hGetUntilMetered uses.
- -
- - Also, half-size parts are used. This is so that
- - the final part of a file can be rolled into the
- - last full-size part, which avoids a problem when the
- - final part could otherwise be too small for S3 to accept
- - it.
- -}
- let partsz' = (partsz `div` toInteger defaultChunkSize `div` 2) * toInteger defaultChunkSize
+ -- The actual part size will be an even multiple of the
+ -- 32k chunk size that hGetUntilMetered uses.
+ let partsz' = (partsz `div` toInteger defaultChunkSize) * toInteger defaultChunkSize
-- Send parts of the file, taking care to stream each part
-- w/o buffering in memory, since the parts can be large.
@@ -202,7 +195,7 @@ store r h = fileStorer $ \k f p -> do
else do
-- Calculate size of part that will
-- be read.
- let sz = if fsz - pos < partsz' * 2
+ let sz = if fsz - pos < partsz'
then fsz - pos
else partsz'
let p' = offsetMeterUpdate p (toBytesProcessed pos)
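
The arithmetic above is easy to misread in diff form, so here is a minimal standalone Haskell sketch of the new part sizing: round the requested part size down to an even multiple of the 32k chunk size, then cut the file into full-size parts plus one final part of whatever remains. The names (actualPartSize, partSizes) and the driver are illustrative, not git-annex's own code.

    import Data.List (unfoldr)

    -- The 32k chunk size that hGetUntilMetered reads in.
    defaultChunkSize :: Integer
    defaultChunkSize = 32 * 1024

    -- Round the requested part size down to an even multiple of the
    -- chunk size (assumes the request is at least one chunk).
    actualPartSize :: Integer -> Integer
    actualPartSize requested =
        (requested `div` defaultChunkSize) * defaultChunkSize

    -- Sizes of the parts a file of the given size is split into; the
    -- final part is simply whatever remains, however small, matching
    -- the simplified logic after this commit.
    partSizes :: Integer -> Integer -> [Integer]
    partSizes fileSize requested = unfoldr step 0
      where
        partsz' = actualPartSize requested
        step pos
            | pos >= fileSize = Nothing
            | otherwise = let sz = min partsz' (fileSize - pos)
                          in Just (sz, pos + sz)

For example, partSizes (100 * 1024) (40 * 1024) yields [32768, 32768, 32768, 4096]: the 40k request rounds down to 32k, and the 4k tail is sent as an ordinary final part rather than being folded into the half-size scheme the old code used.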
diff --git a/doc/special_remotes/S3.mdwn b/doc/special_remotes/S3.mdwn
index 59c1abed7..c7c6f76c5 100644
--- a/doc/special_remotes/S3.mdwn
+++ b/doc/special_remotes/S3.mdwn
@@ -21,14 +21,10 @@ the S3 remote.
* `chunk` - Enables [[chunking]] when storing large files.
`chunk=1MiB` is a good starting point for chunking.
-* `partsize` - Amazon S3 only accepts uploads up to a certain file size,
- and storing larger files requires a multipart upload process.
- Setting `partsize=1GiB` is recommended for Amazon S3; this will
- cause multipart uploads to be done using parts up to 1GiB in size.
-
- This is not enabled by default, since other S3 implementations may
- not support multipart uploads, but can be enabled or changed at any
- time.
+* `partsize` - Specifies the largest object to attempt to store in the
+ bucket. Multipart uploads will be used when storing larger objects.
+ This is not enabled by default, but can be enabled or changed at any
+ time. Setting `partsize=1GiB` is reasonable for S3.
* `keyid` - Specifies the gpg key to use for [[encryption]].
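
As a usage sketch of the documented `partsize` option, multipart uploads could be enabled when the remote is created, or turned on later since the setting can be changed at any time; the remote name `cloud` is a placeholder:

    git annex initremote cloud type=S3 encryption=none partsize=1GiB
    git annex enableremote cloud partsize=1GiB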