path: root/Remote/S3.hs
* implement exporttree=yes configuration (Joey Hess, 2017-09-04)

    * Only export to remotes that were initialized to support it.
    * Prevent storing key/value on export remotes.
    * Prevent enabling exporttree=yes and encryption in the same remote.

    SetupStage Enable was changed to take the old RemoteConfig. This
    allowed only setting exporttree when initially setting up a remote,
    and not configuring it later, after stuff might already be stored in
    the remote.

    Went with =yes rather than =true for consistency with other parts of
    git-annex. Changed docs accordingly.

    This commit was supported by the NSF-funded DataLad project.

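As an illustration of the setup-time check described above, here is a minimal sketch, assuming a RemoteConfig of string pairs; the function name and exact rule are illustrative, not git-annex's actual code:

```haskell
import qualified Data.Map as M

type RemoteConfig = M.Map String String

-- Hypothetical check: refuse to combine exporttree=yes with encryption.
checkExportConfig :: RemoteConfig -> Either String ()
checkExportConfig c
    | M.lookup "exporttree" c == Just "yes"
    , maybe False (/= "none") (M.lookup "encryption" c) =
        Left "cannot enable both exporttree=yes and encryption"
    | otherwise = Right ()
```
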
* refactor ExportActions (Joey Hess, 2017-09-01)

    This will allow disabling exports for remotes that are not configured
    to allow them.

    Also, exportSupported will be useful for the external special remote
    to probe.

    This commit was supported by the NSF-funded DataLad project.

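A sketch of the shape such a record might take; the field names and types here are assumptions for illustration, not git-annex's actual definitions:

```haskell
-- Minimal sketch of an ExportActions-style record (hypothetical fields).
newtype ExportLocation = ExportLocation FilePath

data ExportActions m = ExportActions
    { storeExport :: FilePath -> ExportLocation -> m Bool
    , retrieveExport :: ExportLocation -> FilePath -> m Bool
    , removeExport :: ExportLocation -> m Bool
    , exportSupported :: m Bool  -- lets callers probe whether export works
    }
```
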
* add API for exporting (Joey Hess, 2017-08-29)

    Implemented so far for the directory special remote.

    Several remotes don't make sense to export to. Regular Git remotes,
    obviously, do not. Bup remotes almost certainly do not, since bup
    would need to be used to extract the export; the same goes for Ddar.
    Web and Bittorrent are download-only. GCrypt is always encrypted, so
    exporting to it would be pointless. There's probably no point
    complicating the Hook remotes with exporting at this point.

    External, S3, Glacier, WebDAV, Rsync, and possibly Tahoe should be
    modified to support export.

    Thought about trying to reuse the storeKey/retrieveKeyFile/removeKey
    interface, rather than adding a new interface. But it seemed better
    to keep it separate, to avoid a complicated interface that sometimes
    encrypts/chunks key/value storage and sometimes uses non-key/value
    storage. Any common parts can be factored out.

    Note that storeExport is not atomic.
    doc/design/exporting_trees_to_special_remotes.mdwn has some things in
    the "resuming exports" section that bear on this decision. Basically,
    I don't think, at this time, that an atomic storeExport would help
    with resuming, because exports are not key/value storage, and we
    can't be sure that a partially uploaded file is the same content
    we're currently trying to export.

    Also, note that ExportLocation will always use unix path separators.
    This is important because users may export from a mix of windows and
    unix; it avoids complicating the API with path conversions, and
    ensures that in such a mix they always use the same locations for
    exports.

    This commit was sponsored by Bruno BEAUFILS on Patreon.

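To make the unix-path-separator point concrete, a tiny hedged sketch of how an ExportLocation constructor could normalize separators (an illustrative helper, not the actual implementation):

```haskell
newtype ExportLocation = ExportLocation FilePath
    deriving (Show, Eq)

-- Convert any Windows-style separators so the location is stable
-- across mixed windows/unix exports.
mkExportLocation :: FilePath -> ExportLocation
mkExportLocation = ExportLocation . map fixSep
  where
    fixSep '\\' = '/'
    fixSep c = c
```
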
* fix build with old http-client versions (Joey Hess, 2017-08-17)

* Disable http-client's default 30 second response timeout when HEADing
  an url to check if it exists (Joey Hess, 2017-08-15)

    Some web servers take quite a long time to answer a HEAD request.

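With http-client >= 0.5, the change described here corresponds to building a Manager with no response timeout; a minimal sketch (the actual git-annex code also supports older http-client versions):

```haskell
import Network.HTTP.Client (Manager, managerResponseTimeout, newManager,
    responseTimeoutNone)
import Network.HTTP.Client.TLS (tlsManagerSettings)

-- A Manager that waits indefinitely for the response to arrive, so a
-- slow HEAD answer does not abort the URL existence check.
noTimeoutManager :: IO Manager
noTimeoutManager = newManager tlsManagerSettings
    { managerResponseTimeout = responseTimeoutNone }
```
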
* adieu, MissingH (Joey Hess, 2017-05-16)

    Removed dependency on MissingH, instead depending on the split
    library.

    After laying groundwork for this since 2015, it was mostly
    straightforward. Added Utility.Tuple and Utility.Split. Eyeballed
    System.Path.WildMatch while implementing the same thing.

    Since MissingH's progress meter display was being used, I
    re-implemented my own. Bonus: Now progress is displayed for transfers
    of files of unknown size.

    This commit was sponsored by Shane-o on Patreon.

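For example, the split library covers the kind of string utility MissingH was used for; a quick illustration:

```haskell
import Data.List.Split (splitOn)

-- splitOn replaces what MissingH's string split was used for.
example :: [String]
example = splitOn "," "foo,bar,baz"  -- ["foo","bar","baz"]
```
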
* S3: Fix check of uuid file stored in bucket, which was not working
  (Joey Hess, 2017-02-13)

    The check was broken in two ways. First, nowhere did it error out
    when checkUUIDFile found a different UUID already in the file.
    Instead, it overwrote the uuid file.

    And, checkUUIDFile's implementation was for some reason always
    failing with a ConnectionClosed exception. Apparently something to do
    with using two different runResourceT's and a response getting GCed
    in between. I'm pretty sure that used to work, but I changed to a
    more obviously correct implementation.

    This commit was sponsored by Peter Hogg on Patreon.

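The shape of the fix, as described, is to acquire and consume the response within one resource scope; a generic hedged sketch (not the actual aws calls git-annex makes):

```haskell
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Resource (ResourceT, runResourceT)

-- Hypothetical helper: run the request and consume its response inside
-- a single runResourceT, so nothing is finalized between the two steps.
fetchAndCheck :: ResourceT IO a -> (a -> IO Bool) -> IO Bool
fetchAndCheck acquire check = runResourceT $ do
    rsp <- acquire        -- e.g. GET the uuid file from the bucket
    liftIO (check rsp)    -- fully consume it before the scope closes
```
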
* add SetupStage parameter to RemoteType.setup (Joey Hess, 2017-02-07)

    Most remotes have an idempotent setup that can be reused for
    enableremote, but in a few cases setup needs to know which stage it
    is in, and whether a UUID provided to it was used. This is groundwork
    for making initremote be able to provide a UUID. It should not change
    any behavior.

    Note that it would be nice to make the UUID always be provided to
    setup, and make setup not need to generate and return a UUID. What
    prevented this simplification is Remote.Git.gitSetup, which needs to
    reuse the UUID of the git remote when setting it up, and so has to
    return that UUID.

    This commit was sponsored by Thom May on Patreon.

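A plausible shape for the new parameter, hedged; the actual constructors in git-annex may differ:

```haskell
import qualified Data.Map as M

type RemoteConfig = M.Map String String

-- Init is a fresh initremote; Enable carries the old RemoteConfig of an
-- existing remote, so setup can tell the two cases apart.
data SetupStage = Init | Enable RemoteConfig
```
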
* Fix build with aws 0.16. Thanks, aristidb. (Joey Hess, 2017-02-07)

* fix build warning (Joey Hess, 2016-12-10)

* Remove http-conduit (<2.2.0) constraint (Alper Nebi Yasak, 2016-12-10)

    Since https://github.com/aristidb/aws/issues/206 is resolved, this
    constraint is no longer necessary. However, http-conduit (>=2.2.0)
    requires http-client (>=0.5.0), which introduces some breaking
    changes. This commit also implements those changes, depending on the
    version.

    Fixes: https://git-annex.branchable.com/bugs/Build_with_aws_head_fails/

    Signed-off-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>

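Version-straddling changes like this are commonly handled with CPP guards keyed on the http-client version; an illustrative sketch, not necessarily the exact change made here:

```haskell
{-# LANGUAGE CPP #-}
import Network.HTTP.Client

-- Before 0.5, managerResponseTimeout was a Maybe Int of microseconds;
-- from 0.5 on it is an opaque ResponseTimeout.
withTimeout :: ManagerSettings -> ManagerSettings
#if MIN_VERSION_http_client(0,5,0)
withTimeout s = s { managerResponseTimeout = responseTimeoutMicro 60000000 }
#else
withTimeout s = s { managerResponseTimeout = Just 60000000 }
#endif
```
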
* more p2p progress meters (Joey Hess, 2016-12-07)

    Display progress meter on send and receive from remote.

    Added a new hGetMetered that can read an exact number of bytes (or
    less), updating a meter as it goes.

    This commit was sponsored by Andreas on Patreon.

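The idea behind hGetMetered can be sketched as a chunked read loop that reports progress as it goes; names and details here are illustrative, not the actual Utility code:

```haskell
import qualified Data.ByteString as B
import qualified Data.ByteString.Lazy as L
import System.IO (Handle)

-- Read at most n bytes from the handle in chunks, calling the meter
-- with the running byte count after each chunk.
hGetMeteredSketch :: Handle -> Int -> (Int -> IO ()) -> IO L.ByteString
hGetMeteredSketch h n meter = go 0 []
  where
    chunksize = 32768
    go sofar acc
        | sofar >= n = done acc
        | otherwise = do
            b <- B.hGet h (min chunksize (n - sofar))
            if B.null b
                then done acc  -- EOF: return what was read (possibly less)
                else do
                    let sofar' = sofar + B.length b
                    meter sofar'
                    go sofar' (b : acc)
    done = return . L.fromChunks . reverse
```
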
* Avoid backtraces on expected failures when built with ghc 8; only use
  backtraces for unexpected errors (Joey Hess, 2016-11-15)

    ghc 8 added backtraces on uncaught errors. This is great, but
    git-annex was using error in many places for an error message
    targeted at the user, in some known problem case. A backtrace only
    confuses such a message, so omit it.

    Notably, commands like git annex drop that failed due to eg,
    numcopies, used to use error, so had a backtrace.

    This commit was sponsored by Ethan Aubin.

* plumb RemoteGitConfig through to decryptCipher (Joey Hess, 2016-05-23)

* plumb RemoteGitConfig through to setRemoteCredPair (Joey Hess, 2016-05-23)

* Pass the various gnupg-options configs to gpg in several cases where
  they were not before (Joey Hess, 2016-05-23)

    Removed the instance LensGpgEncParams RemoteConfig because it
    encouraged code that does not take the RemoteGitConfig into account.

    RemoteType's setup was changed to take a RemoteGitConfig, although
    the only place that is able to provide a non-empty one is
    enableremote, when it's changing an existing remote. This led to
    several follow-on changes, and got RemoteGitConfig plumbed through.

* improve info display of OtherStorageClass (Joey Hess, 2016-05-05)

* S3: Allow configuring with requeststyle=path to use path-style bucket
  access instead of the default DNS-style access (Joey Hess, 2016-02-09)

    untested

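In the aws library this maps to the request style field on S3Configuration; a hedged sketch (construction details vary across aws versions):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Aws.Core (Protocol (HTTPS))
import qualified Aws.S3 as S3

-- Path-style requests put the bucket in the path instead of the
-- default DNS/virtual-host style, which puts it in the hostname.
pathStyleConfig :: S3.S3Configuration qt
pathStyleConfig = (S3.s3 HTTPS "s3.amazonaws.com" False)
    { S3.s3RequestStyle = S3.PathStyle }
```
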
* remove 163 lines of code without changing anything except imports (Joey Hess, 2016-01-20)

* Display progress meter in -J mode when downloading from the web.
  (Joey Hess, 2015-11-16)

    Including in addurl, and get --from web, but also in S3 and External
    special remotes when a web url is known for content in those remotes.

* Fix failure to build with aws-0.13.0 and finish nearline support.
  (Joey Hess, 2015-11-02)

    * Fix failure to build with aws-0.13.0.
    * When built with aws-0.13.0, the S3 special remote can be used to
      create google nearline buckets, by setting storageclass=NEARLINE.

* S3: Fix support for using https. (Joey Hess, 2015-10-15)

    Was using the http-only Manager before, not the tls-capable one.

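The gist of the fix in http-client terms; these are the real settings names, though the surrounding git-annex code is of course more involved:

```haskell
import Network.HTTP.Client (Manager, defaultManagerSettings, newManager)
import Network.HTTP.Client.TLS (tlsManagerSettings)

httpOnly :: IO Manager
httpOnly = newManager defaultManagerSettings  -- plain http only

tlsCapable :: IO Manager
tlsCapable = newManager tlsManagerSettings    -- handles https as well
```
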
* add removeKey action to Remote (Joey Hess, 2015-10-08)

    Not implemented for any remotes yet; probably the git remote is the
    only one that will ever implement it.

* avoid deprecation warnings when built with http-client >= 0.4.18
  (Joey Hess, 2015-10-01)

    Since I want git-annex to keep building on debian stable, I need to
    still support the old http-client, which required explicit calls to
    closeManager, or use of withManager, to get Managers to close at
    appropriate times. This is not needed in the new version, and so they
    added a deprecation warning. IMHO much too early, because look at the
    mess I had to go through to avoid that deprecation warning while
    supporting both versions.

* avoid hard dependency on new version of aws (Joey Hess, 2015-09-22)

* S3 storage classes expansion (Joey Hess, 2015-09-17)

    Added support for storageclass=STANDARD_IA to use Amazon's new
    Infrequently Accessed storage.

    Also allows using storageclass=NEARLINE to use Google's NearLine
    storage.

    The necessary changes to aws to support this are in
    https://github.com/aristidb/aws/pull/176

* refactor (Joey Hess, 2015-08-17)

* Simplify setup process for a ssh remote. (Joey Hess, 2015-08-05)

    Now it suffices to run git remote add, followed by git-annex sync.
    Now the remote is automatically initialized for use by git-annex,
    where before the git-annex branch had to manually be pushed before
    using git-annex sync. Note that this involved changes to
    git-annex-shell, so if the remote is using an old version, the manual
    push is still needed.

    Implementation required git-annex-shell be changed, so configlist can
    autoinit a repository even when no git-annex branch has been pushed
    yet. Unfortunate because we'll have to wait for it to get deployed to
    servers before being able to rely on this change in the
    documentation.

    Did consider making git-annex sync push the git-annex branch to repos
    that didn't have a uuid, but this seemed difficult to do without
    complicating it in messy ways.

    It would be cleaner to split a command out from configlist to handle
    the initialization. But this is difficult without sacrificing
    backwards compatibility, for users of old git-annex versions which
    would not use the new command.

* layout (Joey Hess, 2015-06-15)

* show S3 urls for public repos in whereis (Joey Hess, 2015-06-05)

    Note that it's possible for a S3 bucket to be configured to allow
    public access, but for git-annex to not know that it is. I chose to
    not show the url unless public=yes.

* S3: Publicly accessible buckets can be used without creds. (Joey Hess, 2015-06-05)

* public=yes config to send AclPublicRead (Joey Hess, 2015-06-05)

    In my tests, this has to be set when uploading a file to the bucket,
    and then the file can be accessed using the
    bucketname.s3.amazonaws.com url.

    Setting it when creating the bucket didn't seem to make the whole
    bucket public, or allow accessing files stored in it. But I have gone
    ahead and also sent it when creating the bucket, just in case that is
    needed in some cases.

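In aws terms, sending the canned ACL per upload looks roughly like this; field names are from the aws package, while the bucket and key are placeholders:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Aws.S3 as S3
import qualified Data.ByteString.Lazy as L
import Network.HTTP.Client (RequestBody (RequestBodyLBS))

-- A PutObject request carrying AclPublicRead, so the uploaded file is
-- readable at the bucketname.s3.amazonaws.com url.
publicPut :: S3.PutObject
publicPut = (S3.putObject "bucketname" "objectkey" (RequestBodyLBS L.empty))
    { S3.poAcl = Just S3.AclPublicRead }
```
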
* groundwork for readonly access (Joey Hess, 2015-06-05)

    Split S3Info out of S3Handle and added some stubs

* Merge branch 'master' into concurrentprogress (Joey Hess, 2015-05-12)

    Conflicts:
        Command/Fsck.hs
        Messages.hs
        Remote/Directory.hs
        Remote/Git.hs
        Remote/Helper/Special.hs
        Types/Remote.hs
        debian/changelog
        git-annex.cabal

* generalised elem/notElem in ghc 7.10 require some additional type
  signatures when using OverloadedStrings (Joey Hess, 2015-05-10)

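For instance, with OverloadedStrings a string literal no longer pins the container type of the generalized elem, so an annotation is needed; a minimal example:

```haskell
{-# LANGUAGE OverloadedStrings #-}

-- Without the :: String annotation, ghc 7.10 cannot pick a Foldable
-- instance for the overloaded literal.
hasDot :: Bool
hasDot = '.' `elem` ("bucket.name" :: String)
```
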
* S3: Fix incompatibility with bucket names used by hS3; the aws library
  cannot handle upper-case bucket names. git-annex now converts them to
  lower case automatically (Joey Hess, 2015-04-27)

    For example, it failed to get files from a bucket named S3.

    Also fixes `git annex initremote UPPERCASE type=S3`, which failed
    with the new aws library, with a signing error message.

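The gist of the conversion, as a one-liner sketch:

```haskell
import Data.Char (toLower)

-- Normalize the configured bucket name, since the aws library cannot
-- sign requests for upper-case bucket names.
normalizeBucket :: String -> String
normalizeBucket = map toLower
```
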
* S3: git annex enableremote will no longer try to create the bucket,
  which failed since the bucket already exists (Joey Hess, 2015-04-23)

* S3: git annex info will show additional information about a S3 remote
  (endpoint, port, storage class) (Joey Hess, 2015-04-23)

* convert all log priorities, not just debug (Joey Hess, 2015-04-21)

    In particular, error should go to stderr

* S3: Enable debug logging when annex.debug or --debug is set. (Joey Hess, 2015-04-21)

    To debug a bug report, but generally useful.

* add filename to progress bar, and display ok/failed at end (Joey Hess, 2015-04-14)

    This needed plumbing an AssociatedFile through retrieveKeyFileCheap.

* update my email address and homepage url (Joey Hess, 2015-01-21)

* add getFileSize, which can get the real size of a large file on
  Windows (Joey Hess, 2015-01-20)

    Avoid using fileSize, which maxes out at just 2 gb on Windows.
    Instead, use hFileSize, which doesn't have a bounded size. Fixes
    support for files > 2 gb on Windows.

    Note that the InodeCache code only needs to compare a file size, so
    it doesn't matter if the file size wraps. So it has been left as-is.
    This was necessary both to avoid invalidating existing inode caches,
    and because the code passed FileStatus around and would have become
    more expensive if it called getFileSize.

    This commit was sponsored by Christian Dietrich.

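The portable approach described above, sketched; the real getFileSize lives in a Utility module and may differ in detail:

```haskell
import System.IO (IOMode (ReadMode), hFileSize, withFile)

-- hFileSize is not capped at 2 gb on Windows, unlike the fileSize
-- field of FileStatus there.
getFileSize' :: FilePath -> IO Integer
getFileSize' f = withFile f ReadMode hFileSize
```
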
* Fix build with -f-S3. (Joey Hess, 2014-12-19)

* reformat (Joey Hess, 2014-12-16)

* Expand checkurl to support recommended filename, and multi-file-urls
  (Joey Hess, 2014-12-11)

    This commit was sponsored by an anonymous bitcoiner.

* Urls can now be claimed by remotes. This will allow creating, for
  example, an external special remote that handles magnet: and *.torrent
  urls (Joey Hess, 2014-12-08)

* add stub claimUrl (Joey Hess, 2014-12-08)

* support S3 front-end used by globalways.net (Joey Hess, 2014-11-05)

    This threw an unusual exception w/o an error message when probing to
    see if the bucket exists yet. So rather than relying on tryS3, catch
    all exceptions.

    This does mean that it might get an exception for some transient
    network error, think this means the bucket doesn't exist yet, try to
    create it, and then fail when it already exists.

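The shape of the workaround, sketched with a catch-all try; the probe action stands in for the actual S3 request:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, try)

-- Treat any exception from the probe as "bucket does not exist yet",
-- accepting the false-positive risk described above.
bucketExists :: IO a -> IO Bool
bucketExists probe = do
    r <- try probe
    case r of
        Left (_ :: SomeException) -> return False
        Right _ -> return True
```
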
* Revert "work around minimum part size problem"Gravatar Joey Hess2014-11-04
| | | | | | This reverts commit 2ba5af49c94b97c586220c3553367988ef095934. I misunderstood the cause of the problem.