path: root/Remote/S3.hs
Commit message	Author	Date
* avoid hard dependency on new version of aws	Joey Hess	2015-09-22
* S3 storage classes expansion	Joey Hess	2015-09-17
    Added support for storageclass=STANDARD_IA to use Amazon's new
    Infrequently Accessed storage. Also allows using
    storageclass=NEARLINE to use Google's Nearline storage.

    The necessary changes to aws to support this are in
    https://github.com/aristidb/aws/pull/176
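    A minimal sketch of what the config mapping might look like,
    assuming the OtherStorageClass constructor that the pull request
    above adds to the aws library (the function here is illustrative,
    not git-annex's actual code):

        import qualified Data.Text as T
        import qualified Aws.S3 as S3

        -- Map the remote's storageclass= setting onto the aws
        -- library's type. Values are passed through verbatim, which
        -- is what lets STANDARD_IA and NEARLINE work without the
        -- library needing to know about them.
        getStorageClass :: Maybe String -> S3.StorageClass
        getStorageClass (Just s) = S3.OtherStorageClass (T.pack s)
        getStorageClass Nothing = S3.Standard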
* refactor	Joey Hess	2015-08-17
* Simplify setup process for an ssh remote.	Joey Hess	2015-08-05
    Now it suffices to run git remote add, followed by git-annex sync.
    The remote is now automatically initialized for use by git-annex,
    where before the git-annex branch had to be manually pushed before
    using git-annex sync. Note that this involved changes to
    git-annex-shell, so if the remote is using an old version, the
    manual push is still needed.

    Implementation required git-annex-shell to be changed, so that
    configlist can autoinit a repository even when no git-annex branch
    has been pushed yet. Unfortunate, because we'll have to wait for
    it to get deployed to servers before being able to rely on this
    change in the documentation.

    Did consider making git-annex sync push the git-annex branch to
    repos that didn't have a uuid, but this seemed difficult to do
    without complicating it in messy ways.

    It would be cleaner to split a command out from configlist to
    handle the initialization, but that is difficult without
    sacrificing backwards compatibility for users of old git-annex
    versions, which would not use the new command.
* layout	Joey Hess	2015-06-15
* show S3 urls for public repos in whereis	Joey Hess	2015-06-05
    Note that it's possible for an S3 bucket to be configured to allow
    public access, but for git-annex to not know that it is. I chose
    not to show the url unless public=yes.
* S3: Publicly accessible buckets can be used without creds.	Joey Hess	2015-06-05
* public=yes config to send AclPublicRead	Joey Hess	2015-06-05
    In my tests, this has to be set when uploading a file to the
    bucket; the file can then be accessed using the
    bucketname.s3.amazonaws.com url. Setting it when creating the
    bucket didn't seem to make the whole bucket public, or allow
    accessing files stored in it. But I have gone ahead and also sent
    it when creating the bucket, just in case that is needed in some
    case.
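    A sketch of sending that canned ACL with each upload, assuming the
    aws library's PutObject record and its poAcl field (the helper
    name is made up):

        import qualified Data.Text as T
        import qualified Aws.S3 as S3
        import Network.HTTP.Conduit (RequestBody)

        -- Attach a public-read canned ACL to an upload when the
        -- remote is configured with public=yes; otherwise leave the
        -- ACL unset.
        mkPutObject :: Bool -> S3.Bucket -> T.Text -> RequestBody -> S3.PutObject
        mkPutObject public b object body =
            (S3.putObject b object body)
                { S3.poAcl = if public then Just S3.AclPublicRead else Nothing }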
* groundwork for readonly access	Joey Hess	2015-06-05
    Split S3Info out of S3Handle and added some stubs.
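    A rough sketch of the shape of that split (field names are
    assumptions, not the actual stubs): S3Info holds settings that can
    be known without any credentials, while S3Handle layers an
    authenticated connection on top for operations that need one.

        import qualified Aws
        import qualified Aws.S3 as S3
        import Network.HTTP.Conduit (Manager)

        -- Connection-independent settings; constructible with no
        -- creds, which is the groundwork readonly access needs.
        data S3Info = S3Info
            { bucket :: S3.Bucket
            , storageClass :: S3.StorageClass
            , isPublic :: Bool
            }

        -- Everything needed to make authenticated requests.
        data S3Handle = S3Handle
            { hmanager :: Manager
            , hawscfg :: Aws.Configuration
            , hs3cfg :: S3.S3Configuration Aws.NormalQuery
            , hinfo :: S3Info
            }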
* Merge branch 'master' into concurrentprogress	Joey Hess	2015-05-12
    Conflicts:
        Command/Fsck.hs
        Messages.hs
        Remote/Directory.hs
        Remote/Git.hs
        Remote/Helper/Special.hs
        Types/Remote.hs
        debian/changelog
        git-annex.cabal
* generalized elem/notElem in ghc 7.10 require some additional type signatures when using OverloadedStrings	Joey Hess	2015-05-10
* S3: Fix incompatibility with bucket names used by hS3	Joey Hess	2015-04-27
    The aws library cannot handle upper-case bucket names; git-annex
    now converts them to lower case automatically. For example, it
    failed to get files from a bucket named S3. Also fixes `git annex
    initremote UPPERCASE type=S3`, which failed with the new aws
    library, with a signing error message.
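    The conversion itself is the easy part; a sketch of the idea:

        import Data.Char (toLower)

        -- The aws library signs requests using the bucket name as
        -- given, and fails on upper case, so normalize before use.
        normalizeBucketName :: String -> String
        normalizeBucketName = map toLower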
* S3: git annex enableremote will no longer try to create the bucket, which failed since the bucket already exists	Joey Hess	2015-04-23
* S3: git annex info will show additional information about an S3 remote (endpoint, port, storage class)	Joey Hess	2015-04-23
* convert all log priorities, not just debug	Joey Hess	2015-04-21
    In particular, errors should go to stderr.
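    A sketch of such a mapping, assuming the aws library's Logger type
    (a function taking a log level and a message): warnings and errors
    go to stderr, while debug and info chatter appears only when
    debugging is enabled.

        import qualified Aws
        import qualified Data.Text as T
        import System.IO (hPutStrLn, stderr)

        -- Route each aws log priority somewhere sensible, instead of
        -- handling only Debug.
        mkAwsLogger :: Bool -> Aws.Logger
        mkAwsLogger debugenabled level msg = case level of
            Aws.Error -> tostderr
            Aws.Warning -> tostderr
            _ -> whendebugging
          where
            tostderr = hPutStrLn stderr (T.unpack msg)
            whendebugging
                | debugenabled = putStrLn (T.unpack msg)
                | otherwise = return ()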
* S3: Enable debug logging when annex.debug or --debug is set.	Joey Hess	2015-04-21
    To debug a bug report, but generally useful.
* add filename to progress bar, and display ok/failed at end	Joey Hess	2015-04-14
    This needed plumbing an AssociatedFile through retrieveKeyFileCheap.
* update my email address and homepage url	Joey Hess	2015-01-21
* add getFileSize, which can get the real size of a large file on Windows	Joey Hess	2015-01-20
    Avoid using fileSize, which maxes out at just 2 gb on Windows.
    Instead, use hFileSize, which doesn't have a bounded size. Fixes
    support for files > 2 gb on Windows.

    Note that the InodeCache code only needs to compare a file size,
    so it doesn't matter if the file size wraps. So it has been left
    as-is. This was necessary both to avoid invalidating existing
    inode caches, and because the code passed FileStatus around and
    would have become more expensive if it called getFileSize.

    This commit was sponsored by Christian Dietrich.
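    A minimal sketch of the approach: hFileSize reports the size as an
    unbounded Integer, so it is immune to the 2 gb truncation.

        import System.IO

        -- Open the file briefly just to ask for its true size.
        getFileSize :: FilePath -> IO Integer
        getFileSize f = withBinaryFile f ReadMode hFileSize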
* Fix build with -f-S3.	Joey Hess	2014-12-19
* reformat	Joey Hess	2014-12-16
* Expand checkurl to support recommended filename, and multi-file-urls	Joey Hess	2014-12-11
    This commit was sponsored by an anonymous bitcoiner.
* Urls can now be claimed by remotes. This will allow creating, for example, an external special remote that handles magnet: and *.torrent urls.	Joey Hess	2014-12-08
* add stub claimUrl	Joey Hess	2014-12-08
* support S3 front-end used by globalways.net	Joey Hess	2014-11-05
    This threw an unusual exception without an error message when
    probing to see if the bucket exists yet. So rather than relying on
    tryS3, catch all exceptions.

    This does mean that it might get an exception for some transient
    network error, think this means the bucket does not exist yet, try
    to create it, and then fail when it already exists.
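    A sketch of the broadened probe, with made-up names; any exception
    from the HEAD request is read as the bucket not existing, which is
    exactly the tradeoff described above.

        import Control.Exception (SomeException, try)

        -- True only if the probe request succeeds; any exception,
        -- even one without a useful error message, is taken to mean
        -- the bucket does not exist yet.
        probeBucketExists :: IO a -> IO Bool
        probeBucketExists headreq =
            either oops (const (return True)) =<< try headreq
          where
            oops :: SomeException -> IO Bool
            oops _ = return False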
* Revert "work around minimum part size problem"	Joey Hess	2014-11-04
    This reverts commit 2ba5af49c94b97c586220c3553367988ef095934.
    I misunderstood the cause of the problem.
* work around minimum part size problem	Joey Hess	2014-11-04
    When uploading the last part of a file, which was 640229 bytes, S3
    rejected that part: "Your proposed upload is smaller than the
    minimum allowed size".

    I don't know what the minimum is, but the fix is just to include
    the last part into the previous part. Since this can result in a
    part that's double-sized, use half-sized parts normally.
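    The splitting rule, sketched as a pure function: emit half-sized
    parts until what remains fits in one final part of at most the
    full partsize, so no non-final part can fall under the minimum.

        -- Plan the sizes of the parts of a multipart upload.
        planParts :: Integer -> Integer -> [Integer]
        planParts partsize total = go total
          where
            half = partsize `div` 2
            go n
                | n <= partsize = [n]  -- final part, up to double-sized
                | otherwise = half : go (n - half)

        -- planParts 10 25 == [5,5,5,10]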
* fix a couple type errors and the progress bar	Joey Hess	2014-11-04
* fix memory leak	Joey Hess	2014-11-04
    Unfortunately, I don't fully understand why it was leaking using
    the old method of a lazy bytestring. I just know that it was
    leaking, despite neither hGetUntilMetered nor byteStringPopper
    seeming to leak by themselves.

    The new method avoids the lazy bytestring, and simply reads chunks
    from the handle and streams them out to the http socket.
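    A sketch of that chunked streaming, using http-client's
    RequestBodyStream (the chunk size is arbitrary): the popper reads
    the next strict chunk from the handle each time the sender asks
    for more, so only one chunk is live at a time.

        import qualified Data.ByteString as S
        import Network.HTTP.Client (RequestBody(..))
        import System.IO (Handle)

        -- Stream a handle as a request body without building a lazy
        -- ByteString. hGetSome returns the empty string at EOF, which
        -- is how a popper signals that the body is done.
        handleRequestBody :: Integer -> Handle -> RequestBody
        handleRequestBody size h =
            RequestBodyStream (fromIntegral size) $ \needsPopper ->
                needsPopper (S.hGetSome h 65536)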
* combine 2 checks	Joey Hess	2014-11-04
* casts; now fully working.. but still leaking	Joey Hess	2014-11-03
    Still seems to buffer the whole partsize in memory, but I'm pretty
    sure my code is not what's doing it. See
    https://github.com/aristidb/aws/issues/142
* this should avoid leaking memory	Joey Hess	2014-11-03
* logic error	Joey Hess	2014-11-03
* WIP 3	Joey Hess	2014-11-03
* WIP 2	Joey Hess	2014-11-03
* WIP try sending using RequestBodyStreamChunked	Joey Hess	2014-11-03
    May not work; if it does, this is gonna be the simplest way to get
    good memory size and progress reporting.
* link to memory leak bug	Joey Hess	2014-11-03
* improve info display for multipart	Joey Hess	2014-11-03
* fix build	Joey Hess	2014-11-03
* adjust version check	Joey Hess	2014-11-03
    I assume 0.10.6 will have the fix for the bug I reported, which
    got fixed in master already.
* show multipart configuration in git annex info s3remote	Joey Hess	2014-11-03
* finish multipart support using unreleased update to aws lib to yield etags	Joey Hess	2014-11-03
    Untested and not even compiled yet.

    Testing should include checks that file content streams through
    without buffering in memory.

    Note that CL.consume causes all the etags to be buffered in
    memory. This is probably nearly unavoidable, since a request has
    to be constructed that contains the list of etags in its body.
    (While it might be possible to stream generation of the body, that
    would entail making a http request that dribbles out parts of the
    body as the multipart uploads complete, which is not likely to
    work well.)

    To limit this being a problem, it's best for partsize to be set to
    some suitably large value, like 1gb. Then a full terabyte file
    will need only 1024 etags to be stored, which will probably use
    around 1 mb of memory.
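    A quick check of that sizing arithmetic (pure illustration):

        -- Number of parts, and so etags to buffer, for a given file
        -- size and partsize, rounding up for a trailing partial part.
        numEtags :: Integer -> Integer -> Integer
        numEtags filesize partsize = (filesize + partsize - 1) `div` partsize

        -- numEtags (1024 ^ 4) (1024 ^ 3) == 1024  -- 1 tb at 1 gb parts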
* WIP multipart S3 upload	Joey Hess	2014-10-28
    I'm a little stuck on getting the list of etags of the parts. This
    seems to require taking the md5 of each part locally, which
    doesn't get along well with lazily streaming in the part from the
    file. It would need to read the file twice, or lose laziness and
    buffer a whole part -- but parts might be quite large.

    This seems to be a problem with the API provided; S3 is supposed
    to return an etag, but that is not exposed. I have filed a bug:
    https://github.com/aristidb/aws/issues/141
* fix build	Joey Hess	2014-10-23
* fix build	Joey Hess	2014-10-23
* update for aws 0.10's better handling of DNE for HEAD	Joey Hess	2014-10-23
    Kept support for older aws, since Debian still has 0.9.2.
* fix build	Joey Hess	2014-10-23
* one last build fix, yes it builds now	Joey Hess	2014-10-23
* needs type families	Joey Hess	2014-10-23
* fix build	Joey Hess	2014-10-23