path: root/Remote/S3real.hs
* unify ellipsis handling (Joey Hess, 2011-07-19)
    And add a simple dots-based progress display, currently only used in v2 upgrade.
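    A minimal sketch of a dots-based progress display (illustrative only, not
    the actual git-annex code; the function name is made up):

        import System.IO (hFlush, hPutStr, stdout)

        -- Run an action per item, printing one dot per item so the user sees
        -- progress; flush after each dot so they are not buffered to the end.
        withDots :: (a -> IO b) -> [a] -> IO ()
        withDots act xs = do
            mapM_ (\x -> act x >> hPutStr stdout "." >> hFlush stdout) xs
            putStrLn ""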
* finished hlint pass (Joey Hess, 2011-07-15)
* rename (Joey Hess, 2011-07-05)
* renamed GitRepo to Git (Joey Hess, 2011-06-30)
    It was always imported qualified as Git anyway
* rename modules for data types into Types/ directory (Joey Hess, 2011-06-01)
* more pointless monadic golfing (Joey Hess, 2011-05-16)
* IA: do not create bucket at initremote time (Joey Hess, 2011-05-16)
    This way, the metadata sent when uploading a file is applied to the bucket then.
* add a few tweaks to make it easy to use the Internet Archive's variant of S3 (Joey Hess, 2011-05-16)
    In particular, munge key filenames to comply with the IA's filename limits,
    disable encryption, support their nonstandard way of creating buckets, and
    allow x-amz-* headers to be specified in initremote to set item metadata.
    Still TODO: initremote does not handle multiword metadata headers right.
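    A rough sketch of the kind of filename munging this describes (the
    whitelist and function name are assumptions, not the real git-annex code):

        import Data.Char (isAsciiLower, isAsciiUpper, isDigit)

        -- Replace anything outside a conservative whitelist with '_' so a
        -- key's filename fits within the Internet Archive's restrictions.
        iaSafeName :: String -> String
        iaSafeName = map escape
          where
            escape c
                | isAsciiUpper c || isAsciiLower c || isDigit c = c
                | c `elem` "_-." = c
                | otherwise = '_'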
* refactor (Joey Hess, 2011-05-16)
* simplified a bunch of Maybe handling (Joey Hess, 2011-05-15)
* avoid always decrypting cipher (Joey Hess, 2011-05-01)
    Last change moved cipher decryption to remote setup time. Fixed this with a bit of a hack.
* factor out base64 code (Joey Hess, 2011-05-01)
* S3: When encryption is enabled, the Amazon S3 login credentials are stored,
    encrypted, in .git-annex/remotes.log, so environment variables need not be
    set after the remote is initialized. (Joey Hess, 2011-05-01)
* rsync special remote (Joey Hess, 2011-04-27)
    Fully tested and working, including resuming and encryption. (Though not
    resuming when sending *with* encryption; gpg doesn't produce identical
    output each time.) Uses same layout as the directory special remote and
    the .git/annex/objects/ directory.
* ensure tmp dir exists (Joey Hess, 2011-04-21)
* fix S3 upload buffering problem (Joey Hess, 2011-04-21)
    Provide file size to new version of hS3.
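    The shape of that fix, as a hedged sketch (assuming hS3's
    Network.AWS.S3Object API; the exact plumbing in git-annex differs): pass
    the file's size up front so the lazy ByteString can be streamed rather
    than forced into memory just to measure its length.

        import qualified Data.ByteString.Lazy as L
        import Network.AWS.AWSConnection (AWSConnection)
        import Network.AWS.S3Object (S3Object(..), sendObject)
        import System.Posix.Files (fileSize, getFileStatus)

        -- Upload a local file, supplying its size as a Content-Length header
        -- so the body need not be held in memory to compute its length.
        storeFile :: AWSConnection -> String -> FilePath -> IO ()
        storeFile conn bucket file = do
            stat <- getFileStatus file
            content <- L.readFile file
            let object = S3Object bucket file ""
                    [("Content-Length", show (fileSize stat))]
                    content
            _ <- sendObject conn object
            return ()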
* update on memory leak (Joey Hess, 2011-04-19)
    Finished applying to S3 the change that fixed the memory leak in bup, but
    it didn't seem to help S3.. with encryption it still grows to 2x file size.
* bup: Avoid memory leak when transferring encrypted data. (Joey Hess, 2011-04-19)
    This was a most surprising leak. It occurred in the process that is forked
    off to feed data to gpg. That process was passed a lazy ByteString of
    input, and ghc seemed to not GC the ByteString as it was lazily read and
    consumed, so memory slowly leaked as the file was read and passed through
    gpg to bup.

    To fix it, I simply changed the feeder to take an IO action that returns
    the lazy bytestring, and fed the result directly to hPut. AFAICS, this
    should change nothing WRT buffering. But somehow it makes ghc's GC do the
    right thing. Probably I triggered some weakness in ghc's GC (version
    6.12.1). (Note that S3 still has this leak, and others too. Fixing it will
    involve another dance with the type system.)

    Update: One theory I have is that this has something to do with the
    forking of the feeder process. Perhaps, when the ByteString is produced
    before the fork, ghc decides it needs to hold a pointer to the start of
    it, for some reason -- maybe it doesn't realize that it is only used in
    the forked process.
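    A hedged sketch of the shape of that change (names simplified; not the
    actual git-annex feeder code):

        import System.IO (Handle)
        import qualified Data.ByteString.Lazy as L

        -- Before: the forked feeder was handed an already-constructed lazy
        -- ByteString, which ghc 6.12.1 then never collected as it was consumed.
        feedBefore :: L.ByteString -> Handle -> IO ()
        feedBefore content h = L.hPut h content

        -- After: the feeder is handed an IO action producing the lazy
        -- ByteString, so the string is created inside the forked process and
        -- can be collected as it streams into gpg.
        feedAfter :: IO L.ByteString -> Handle -> IO ()
        feedAfter getcontent h = getcontent >>= L.hPut h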
* refactor (Joey Hess, 2011-04-19)
* Fix stalls in S3 when transferring encrypted data. (Joey Hess, 2011-04-19)
    Stalls were caused by code that did approximately:

        content' <- liftIO $ withEncryptedContent cipher content return
        store content'

    The return evaluated without actually reading content from S3, and so the
    cleanup code began waiting on gpg to exit before gpg could send all its
    data.

    Fixing it involved moving the `store` type action into the IO monad:

        liftIO $ withEncryptedContent cipher content store

    Which was a bit of a pain to do, thank you type system, but avoids the
    problem as now the whole content is consumed, and stored, before cleanup.
* S3 crypto support (Joey Hess, 2011-04-17)
    Untested, I will need to dust off my S3 keys, and plug the modem back in
    that was unplugged last night due to very low battery bank power. But it
    compiles, so it's probably perfect. :)
* rename (Joey Hess, 2011-04-17)
* encryption key management working (Joey Hess, 2011-04-16)
    Encrypted remotes don't yet encrypt data, but git annex initremote can be
    used to generate a cipher and add additional gpg keys that can use it.
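    A hedged sketch of the general idea (the gpg flags are real, but this is
    not git-annex's actual key-management code and the function names are
    made up):

        import System.Process (readProcess)

        -- Ask gpg for some armored random data to serve as the shared cipher.
        genCipher :: IO String
        genCipher = readProcess "gpg" ["--gen-random", "--armor", "1", "512"] ""

        -- Encrypt the cipher to every gpg key that should be able to use the
        -- remote; the result is what gets stored with the remote's config.
        encryptCipherTo :: [String] -> String -> IO String
        encryptCipherTo keyids cipher = readProcess "gpg" args cipher
          where
            args = ["--encrypt", "--armor"]
                ++ concatMap (\k -> ["--recipient", k]) keyids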
* RemoteConfig type (Joey Hess, 2011-04-15)
* improve robustness when S3 access tokens are not configured (Joey Hess, 2011-03-30)
* cost bugfixes (Joey Hess, 2011-03-30)
* allow directory remotes to be in different locations (Joey Hess, 2011-03-30)
    Two machines might have access to the same directory remote on different
    paths, so don't include the path in its persistent config; instead, use
    the git config to record it.
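    A sketch of what that looks like in practice (the config key name is an
    assumption; git-annex's real lookup goes through its own config layer):

        import System.Process (readProcess)

        -- Look the directory remote's local path up in .git/config rather
        -- than in the remote's persistent (shared) configuration.
        directoryRemotePath :: String -> IO FilePath
        directoryRemotePath remotename = do
            out <- readProcess "git"
                ["config", "--get", "remote." ++ remotename ++ ".annex-directory"] ""
            return (takeWhile (/= '\n') out)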
* boilerplate reduction (Joey Hess, 2011-03-30)
* nasty hack to build when hS3 is not available (Joey Hess, 2011-03-30)
    So, it would be nicer to just use Cabal and take advantage of its
    conditional compilation support. But Cabal seems to lack good support for
    a package with an internal library that is used by multiple executables;
    it wants to build everything twice or more. That's too slow for me.
    Anyway, fairly soon I expect to upgrade hS3 to a requirement, and I can
    just revert this.
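    For illustration only (not necessarily the hack used here, and WITH_S3 is
    an assumed flag): one generic way to keep building when hS3 is missing is
    to guard the dependent code with CPP and supply a stub otherwise.

        {-# LANGUAGE CPP #-}

        -- Report whether this build includes S3 support; a real version of
        -- the hack would also stub out the hS3-dependent remote code when
        -- the library is missing.
        s3Supported :: Bool
        #ifdef WITH_S3
        s3Supported = True
        #else
        s3Supported = False
        #endif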