path: root/Remote
Commit message    Author    Age
...
* improve display when lockcontent fails    Joey Hess    2015-10-09
    /dev/null stderr; ssh is still able to display a password prompt despite this.
    Show some messages so the user knows it's locking a remote, and knows if that locking failed.
* implement lockContent for ssh remotes    Joey Hess    2015-10-09
* fix local dropping to not require extra locking of copies, but only that the local copy be locked for removal    Joey Hess    2015-10-09
* fix lockKey to run callback in original Annex monad, not local remote's    Joey Hess    2015-10-09
* content locking during drop working for local git remotes    Joey Hess    2015-10-09
    Only ssh remotes lack locking now.
* add removeKey action to Remote    Joey Hess    2015-10-08
    Not implemented for any remotes yet; probably the git remote is the only one that will ever implement it.
* add lockContentShared    Joey Hess    2015-10-08
    Also, rename lockContent to lockContentExclusive.
    inAnnexSafe should perhaps be eliminated, and instead use `lockContentShared inAnnex`. However, I'm waiting on that, as there are only 2 call sites for inAnnexSafe and it's fiddly.
* other 80% of avoiding verification when hard linking to objects in shared repo    Joey Hess    2015-10-02
    In c3b38fb2a075b4250e867ebd910324c65712c747, it actually only handled uploading objects to a shared repository. Avoiding verification when downloading objects from a shared repository was a lot harder.
    On the plus side, if the process of downloading a file from a remote is able to verify its content on the side, the remote can indicate this now, and avoid the extra post-download verification. As of yet, I don't have any remotes (except Git) using this ability. Some more work would be needed to support it in special remotes.
    It would make sense for tahoe to implicitly verify things downloaded from it; as long as you trust your tahoe server (which typically runs locally), there's cryptographic integrity. OTOH, despite bup being based on shas, a bup repo under an attacker's control could have the git ref used for an object changed, and so a bup repo shouldn't implicitly verify. Indeed, tahoe seems unique in being trustworthy enough to implicitly verify.
* avoid verification when hard linking to objects in shared repository    Joey Hess    2015-10-02
    Such a repository is implicitly trusted, so there's no point.
* Do verification of checksums of annex objects downloaded from remotes.    Joey Hess    2015-10-01
    * When annex objects are received into git repositories, their checksums are verified then too.
    * To get the old, faster, behavior of not verifying checksums, set annex.verify=false, or remote.<name>.annex-verify=false.
    * setkey, rekey: These commands also now verify that the provided file matches the key, unless annex.verify=false.
    * reinject: Already verified content; this can now be disabled by setting annex.verify=false.
    recvkey and reinject already did verification, so removed now duplicate code from them.
    fsck still does its own verification, which is ok since it does not use getViaTmp, so verification doesn't happen twice when using fsck --from.
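    As a hedged illustration of the settings mentioned above (the remote name "cloud" is made up), verification could be disabled globally or per remote with ordinary git config:

        git config annex.verify false
        git config remote.cloud.annex-verify false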
* refactor    Joey Hess    2015-10-01
* avoid deprecation warnings when built with http-client >= 0.4.18    Joey Hess    2015-10-01
    Since I want git-annex to keep building on debian stable, I need to still support the old http-client, which required explicit calls to closeManager, or use of withManager to get Managers to close at appropriate times. This is not needed in the new version, and so they added a deprecation warning. IMHO much too early, because look at the mess I had to go through to avoid that deprecation warning while supporting both versions..
* avoid hard dependency on new version of aws    Joey Hess    2015-09-22
* S3 storage classes expansion    Joey Hess    2015-09-17
    Added support for storageclass=STANDARD_IA to use Amazon's new Infrequently Accessed storage.
    Also allows using storageclass=NEARLINE to use Google's NearLine storage.
    The necessary changes to aws to support this are in https://github.com/aristidb/aws/pull/176
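    A sketch of how the new storage classes might be selected when creating an S3 special remote (the remote names, and the Google endpoint, are illustrative assumptions, not from the entry above):

        git annex initremote cloud type=S3 encryption=none storageclass=STANDARD_IA
        git annex initremote nearline type=S3 encryption=none host=storage.googleapis.com storageclass=NEARLINE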
* annex.hardlink extended to also try to use hard links when copying from the repository to a remote.    Joey Hess    2015-09-14
    Also, it used to only check that one of the repos was not in direct mode; now when either repo is direct mode, annex.hardlink won't have an effect.
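    annex.hardlink is a boolean git config; a minimal sketch of turning it on before copying to a local remote (which repository the setting belongs in, and the remote name "nearby", are assumptions for illustration):

        git config annex.hardlink true
        git annex copy --to nearby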
* support gpg.program    Joey Hess    2015-09-09
    When gpg.program is configured, it's used to get the command to run for gpg. Useful on systems that have only a gpg2 command or want to use it instead of the gpg command.
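    For example, on a system that only ships gpg2 (gpg.program is a standard git setting, not specific to git-annex):

        git config --global gpg.program gpg2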
* disable whereisKey for encrypted or chunked remotes    Joey Hess    2015-08-19
    This only makes sense for public repos that are not chunked, so that there's a 1:1 mapping from Key in the git-annex repo to file on the remote. Rather than making every remote implementation deal with that, just disable whereisKey when it doesn't make sense.
* make whereis show urls when web remote does not have content    Joey Hess    2015-08-17
    This is needed when external special remotes register an url for a key.
* External special remotes can now be built that can be used in readonly mode, where git-annex downloads content from the remote using regular http.    Joey Hess    2015-08-17
    Note that, if an url is added to the web log for such a remote, it's not distinguishable from another url that might be added for the web remote. (Because the web log doesn't distinguish which remote owns a plain url. Urls with a downloader set are distinguishable, but we're not using them here.)
    This seems ok-ish.. In such a case, both remotes will try to use both urls, and both remotes should be able to. The only issue I see is that dropping a file from the web remote will remove both urls in this case. This is not often done, and could even be considered a feature, I suppose.
* export some always failing methods for readonly remotes    Joey Hess    2015-08-17
* refactor    Joey Hess    2015-08-17
* refactor    Joey Hess    2015-08-17
* Added WHEREIS to external special remote protocol.    Joey Hess    2015-08-13
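    A rough sketch of the exchange this adds, assuming the message names follow the protocol's usual REQUEST / REQUEST-SUCCESS / REQUEST-FAILURE pattern (the key and url shown are made up):

        WHEREIS SHA256E-s1048576--0123abcd.iso
        WHEREIS-SUCCESS https://example.com/mirror/0123abcd.iso
        (or WHEREIS-FAILURE if the remote has nothing useful to report)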
* add some debugs to get timings    Joey Hess    2015-08-13
    Note that I had one in Annex.Action.startup too, but it resulted in a weird message printed by ssh, "channel 2: bad ext data". I don't know why, but it only happened when transferinfo was run, so I wonder if 1d71ad072e13c8ed1cb8b34367b57d59e651f0a9 introduced a fragility somehow.
* --debug is passed along to git-annex-shell when git-annex is in debug mode.    Joey Hess    2015-08-13
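    For instance (the file and remote name are made up), a debug run against an ssh remote now also carries the flag through to the server side:

        git annex --debug get bigfile --from myserver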
* Sped up downloads of files from ssh remotes, reducing the non-data-transfer overhead 6x.    Joey Hess    2015-08-13
* remove debug print    Joey Hess    2015-08-13
* Added support for SHA3 hashed keys (in 8 varieties), when git-annex is built using the cryptonite library.    Joey Hess    2015-08-06
    While cryptohash has SHA3 support, it has not been updated for the final version of the spec. Note that cryptonite has not been ported to all arches that cryptohash builds on yet.
* Simplify setup process for a ssh remote.    Joey Hess    2015-08-05
    Now it suffices to run git remote add, followed by git-annex sync. Now the remote is automatically initialized for use by git-annex, where before the git-annex branch had to manually be pushed before using git-annex sync. Note that this involved changes to git-annex-shell, so if the remote is using an old version, the manual push is still needed.
    Implementation required git-annex-shell be changed, so configlist can autoinit a repository even when no git-annex branch has been pushed yet. Unfortunate because we'll have to wait for it to get deployed to servers before being able to rely on this change in the documentation.
    Did consider making git-annex sync push the git-annex branch to repos that didn't have a uuid, but this seemed difficult to do without complicating it in messy ways.
    It would be cleaner to split a command out from configlist to handle the initialization. But this is difficult without sacrificing backwards compatibility, for users of old git-annex versions which would not use the new command.
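    The simplified setup described above, spelled out (the host, path, and remote name are illustrative):

        git remote add server ssh://example.com/~/annex
        git annex sync server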
* remove unused imports    Joey Hess    2015-07-31
* Fix rsync special remote to work when -Jn is used for concurrent uploads.    Joey Hess    2015-07-30
* Fix bug that prevented uploads to remotes using new-style chunking from resuming after the last successfully uploaded chunk.    Joey Hess    2015-07-16
    "checkPresent baser" was wrong; the baser has a dummy checkPresent action, not the real one. So, to fix this, we need to call preparecheckpresent to get a checkpresent action that can be used to check if chunks are present.
    Note that, for remotes like S3, this means that the preparer is run, which opens a S3 handle that will be used for each checkpresent of a chunk. That's a good thing; if we're resuming an upload that's already many chunks in, it'll reuse that same http connection for each chunk it checks.
    Still, it's not a perfectly ideal thing, since this is a different http connection than the one that will be used to upload chunks. It would be nice to improve the API so that both use the same http connection.
* layout    Joey Hess    2015-06-15
* tahoe: Use ~/.tahoe-git-annex/ rather than ~/.tahoe/git-annex/ to avoid old versions of tahoe create-client choking.    Joey Hess    2015-06-09
* show S3 urls for public repos in whereis    Joey Hess    2015-06-05
    Note that it's possible for a S3 bucket to be configured to allow public access, but for git-annex to not know that it is. I chose to not show the url unless public=yes.
* S3: Publicly accessible buckets can be used without creds.    Joey Hess    2015-06-05
* public=yes config to send AclPublicRead    Joey Hess    2015-06-05
    In my tests, this has to be set when uploading a file to the bucket and then the file can be accessed using the bucketname.s3.amazonaws.com url.
    Setting it when creating the bucket didn't seem to make the whole bucket public, or allow accessing files stored in it. But I have gone ahead and also sent it when creating the bucket just in case that is needed in some case.
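    A hedged sketch of creating such a publicly readable S3 remote (the remote and bucket names are made up; encryption is left off since the content is meant to be world-readable):

        git annex initremote pubs3 type=S3 encryption=none public=yes bucket=my-public-bucket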
* groundwork for readonly access    Joey Hess    2015-06-05
    Split S3Info out of S3Handle and added some stubs.
* remove Params constructor from Utility.SafeCommand    Joey Hess    2015-06-01
    This removes a bit of complexity, and should make things faster (avoids tokenizing Params string), and probably involve less garbage collection. In a few places, it was useful to use Params to avoid needing a list, but that is easily avoided.
    Problems noticed while doing this conversion:
    * Some uses of Params "oneword" which was entirely unnecessary overhead.
    * A few places that built up a list of parameters with ++ and then used Params to split it!
    Test suite passes.
* fromkey, registerurl: Allow urls to be specified instead of keys, and generate URL keys.    Joey Hess    2015-05-22
    This is especially useful because the caller doesn't need to generate valid url keys, which involves some escaping of characters, and may involve taking a md5sum of the url if it's too long.
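    For instance (the url and filename are made up), fromkey can now be pointed directly at an url instead of a pre-built URL key:

        git annex fromkey https://example.com/big.iso big.iso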
* use lock pools throughout git-annex    Joey Hess    2015-05-19
    The one exception is in Utility.Daemon. As long as a process only daemonizes once, which seems reasonable, and as long as it avoids calling checkDaemon once it's already running as a daemon, the fcntl locking gotchas won't be a problem there.
    Annex.LockFile has its own separate lock pool layer, which has been renamed to LockCache. This is a persistent cache of locks that persist until closed.
    This is not quite done; lockContent still needs to be converted.
* Avoid accumulating transfer failure log files unless the assistant is being used.    Joey Hess    2015-05-12
    Only the assistant uses these, and only the assistant cleans them up, so make only git annex transferkeys write them.
    There is one behavior change from this. If glacier is being used, and a manual git annex get --from glacier fails because the file isn't available yet, the assistant will no longer later see that failed transfer file and retry the get. Hope no-one depended on that old behavior.
* Take space that will be used by running downloads into account when checking annex.diskreserve.    Joey Hess    2015-05-12
* Merge branch 'master' into concurrentprogress    Joey Hess    2015-05-12
    Conflicts:
        Command/Fsck.hs
        Messages.hs
        Remote/Directory.hs
        Remote/Git.hs
        Remote/Helper/Special.hs
        Types/Remote.hs
        debian/changelog
        git-annex.cabal
* generalized elem/notElem in ghc 7.10 require some additional type signatures when using OverloadedStrings    Joey Hess    2015-05-10
* remaining dataenc to sandi conversions    Joey Hess    2015-05-07
    I've tested all the dataenc to sandi conversions except Assistant.XMPP, and all have unchanged behavior, including behavior on large unicode code points.
* S3: Fix incompatibility with bucket names used by hS3; the aws library cannot handle upper-case bucket names. git-annex now converts them to lower case automatically.    Joey Hess    2015-04-27
    For example, it failed to get files from a bucket named S3. Also fixes `git annex initremote UPPERCASE type=S3`, which failed with the new aws library, with a signing error message.
* Fix bogus failure of fsck --fast.    Joey Hess    2015-04-27
* S3: git annex enableremote will not create a bucket name, which failed since the bucket already exists.    Joey Hess    2015-04-23
* S3: git annex info will show additional information about a S3 remote (endpoint, port, storage class).    Joey Hess    2015-04-23