path: root/Remote/Helper
* deal with old repositories with non-encrypted creds (Joey Hess, 2014-09-18)

  See 2fb7ad68637cc4e1092f835055a974f141808ca0 for backstory about how a repo
  could be in this state. When decryption fails, the repo must be using
  non-encrypted creds.

  Note that creds are encrypted/decrypted using the encryption cipher which
  is stored in the repo, so the decryption cannot fail due to missing gpg
  keys etc. (For !shared encryption, the cipher is itself encrypted using
  some gpg key(s), and the decryption of the cipher happens earlier, so it is
  not affected by this change.)

  Print a warning message for !shared repos, and continue on using the
  cipher.

  Wrote a page explaining what users hit by this bug should do.

  This commit was sponsored by Samuel Tardieu.
* glacier, S3: Fix bug that caused embedded creds to not be encrypted using
  the remote's key. (Joey Hess, 2014-09-18)

  encryptionSetup must be called before setRemoteCredPair. Otherwise, the
  RemoteConfig doesn't have the cipher in it, and so no cipher is used to
  encrypt the embedded creds.

  This is a security fix for non-shared encryption methods!

  For encryption=shared, there's no security problem, just an inconsistency
  in whether the embedded creds are encrypted.

  This is very important to get right, so used some types to help ensure
  that setRemoteCredPair is only run after encryptionSetup. Note that the
  external special remote bypasses the type safety, since creds can be set
  after the initial remote config, if the external special remote program
  requests it. Also note that IA remotes never use encryption, so
  encryptionSetup is not run for them at all, and again the type safety is
  bypassed.

  This leaves two open questions:

  1. What to do about S3 and glacier remotes that were set up using
     encryption=pubkey/hybrid with embedcreds? Such a git repo has a
     security hole embedded in it, and this needs to be communicated to the
     user. Is the changelog enough?

  2. enableremote won't work in such a repo, because git-annex will try to
     decrypt the embedded creds, which are not encrypted, so fails. This
     needs to be dealt with, especially for encryption=shared repos, which
     are not really broken, just inconsistently configured. Noticing that
     problem for encryption=shared is what led to commit
     cc54ff9e49260cd94f938e69e926a273e231ef4e, which tried to fix the
     problem by not decrypting the embedded creds.

  This commit was sponsored by Josh Taylor.
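  A minimal sketch of using types to enforce that ordering, in the spirit
  described above; the SetupStage kind and these signatures are illustrative
  assumptions, not git-annex's actual API:

      {-# LANGUAGE DataKinds, KindSignatures #-}

      -- Hypothetical stages of special remote setup.
      data SetupStage = Fresh | Encrypted

      -- A config tagged with how far setup has progressed.
      newtype RemoteConfig (s :: SetupStage) = RemoteConfig [(String, String)]

      -- encryptionSetup is the only way to obtain an 'Encrypted config...
      encryptionSetup :: RemoteConfig 'Fresh -> IO (RemoteConfig 'Encrypted)
      encryptionSetup (RemoteConfig c) =
          return $ RemoteConfig (("cipher", "<generated>") : c)

      -- ...and setRemoteCredPair only accepts an 'Encrypted config, so the
      -- compiler rejects any call that skips encryptionSetup.
      setRemoteCredPair :: RemoteConfig 'Encrypted -> (String, String)
          -> IO (RemoteConfig 'Encrypted)
      setRemoteCredPair (RemoteConfig c) (login, password) =
          return $ RemoteConfig (("creds", login ++ ":" ++ password) : c)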
* Revert "S3, Glacier, WebDAV: Fix bug that prevented accessing the creds when ↵Gravatar Joey Hess2014-09-18
| | | | | | | | | | the repository was configured with encryption=shared embedcreds=yes." This reverts commit cc54ff9e49260cd94f938e69e926a273e231ef4e. I can find no basis for that commit and think that I made it in error. setRemoteCredPair always encrypts using the cipher from remoteCipher, even when the cipher is shared.
* more lock file refactoring (Joey Hess, 2014-08-20)

  Also fixes test suite failures introduced in recent commits, where
  inAnnexSafe failed in indirect mode, since it tried to open the lock file
  ReadWrite. This is why the new checkLocked opens it ReadOnly.

  This commit was sponsored by Chad Horohoe.
* reorganize and refactor lock code (Joey Hess, 2014-08-20)

  Added a convenience Utility.LockFile that is not a windows/posix
  portability shim, but still manages to cut down on the boilerplate around
  locking.

  This commit was sponsored by Johan Herland.
* forgot some lifts (Joey Hess, 2014-08-20)
* Ensure that all lock fds are close-on-exec, fixing various problems with
  them being inherited by child processes such as git commands. (Joey Hess,
  2014-08-20)

  (With the exception of daemon pid locking.)

  This fixes part of #758630. I reproduced the assistant locking eg, a
  removable drive's annex journal lock file and forking a long-running
  git-cat-file process that inherited that lock.

  This did not affect Windows.

  Considered doing a portable Utility.LockFile layer, but git-annex uses
  posix locks in several special ways that have no direct Windows
  equivalent, and it seems like it would mostly be a complication.

  This commit was sponsored by Protonet.
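  For reference, a minimal sketch of opening a lock file with its fd marked
  close-on-exec, using the unix package (signatures as of unix-2.7, the era
  of this commit; not git-annex's actual Utility.LockFile code):

      import System.Posix.IO
      import System.Posix.Types (Fd)

      -- Open a lock file and mark its fd close-on-exec, so child
      -- processes (eg, forked git commands) do not inherit the lock.
      openLockFd :: FilePath -> IO Fd
      openLockFd f = do
          fd <- openFd f ReadWrite (Just 0o644) defaultFileFlags
          setFdOption fd CloseOnExec True
          return fd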
* S3, Glacier, WebDAV: Fix bug that prevented accessing the creds when the
  repository was configured with encryption=shared embedcreds=yes. (Joey
  Hess, 2014-08-12)

  Since encryption=shared, the encryption key is stored in the git repo, so
  there is no point at all in encrypting the creds, also stored in the git
  repo with that key. So `initremote` doesn't. The creds are simply stored
  base-64 encoded.

  However, it then tried to always decrypt creds when encryption was used..
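  A sketch of that unencrypted storage path, using the base64-bytestring
  package (illustrative only; git-annex's own creds helpers differ):

      import qualified Data.ByteString.Char8 as B
      import qualified Data.ByteString.Base64 as B64

      -- With encryption=shared there is no point encrypting the creds,
      -- so they are embedded base-64 encoded, "login:password" style.
      encodeCreds :: (String, String) -> String
      encodeCreds (login, password) =
          B.unpack $ B64.encode $ B.pack $ login ++ ":" ++ password

      decodeCreds :: String -> Maybe (String, String)
      decodeCreds s = case B64.decode (B.pack s) of
          Left _ -> Nothing
          Right b -> case break (== ':') (B.unpack b) of
              (l, ':':p) -> Just (l, p)
              _ -> Nothing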
* testremote: Add testing of behavior when remote is not available (Joey
  Hess, 2014-08-10)

  Added a mkUnavailable method, which a Remote can use to generate a version
  of itself that is not available. Implemented for several, but not yet all,
  remotes.

  This allows testing that checkPresent properly throws an exception when it
  cannot check if a key is present or not. It also allows testing that the
  other methods don't throw exceptions in these circumstances.

  This immediately found several bugs, which this commit also fixes!

  * git remotes using ssh accidentally had checkPresent return an exception,
    rather than throwing it
  * The chunking code accidentally returned False rather than propagating an
    exception when there were no chunks and checkPresent threw an exception
    for the non-chunked key.

  This commit was sponsored by Carlo Matteo Capocasa.
* fix checkPresent error handling for non-present local git repos (Joey
  Hess, 2014-08-08)

  guardUsable r (error "foo") *returned* an error, rather than throwing it
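  The returned-vs-thrown distinction at play, in a small hedged sketch
  (simplified; not guardUsable's real signature):

      import Control.Exception (throwIO)

      -- Returning an errorful value hides the failure inside a thunk;
      -- it only blows up if and when the caller forces it:
      returned :: IO (Either String Bool)
      returned = return (error "remote not usable")

      -- Throwing surfaces the failure immediately, at the call site:
      thrown :: IO (Either String Bool)
      thrown = throwIO (userError "remote not usable")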
* check for 200 response (Joey Hess, 2014-08-08)
* WebDAV: Avoid buffering whole file in memory when downloading. (Joey Hess,
  2014-08-08)

  httpBodyRetriever will later also be used by S3.

  This commit was sponsored by Ethan Aubin.
* unify exception handling into Utility.Exception (Joey Hess, 2014-08-07)

  Removed old extensible-exceptions, only needed for very old ghc.

  Made webdav use Utility.Exception, to work after some changes in DAV's
  exception handling.

  Removed Annex.Exception. Mostly this was trivial, but note that tryAnnex
  is replaced with tryNonAsync and catchAnnex replaced with catchNonAsync.
  In theory that could be a behavior change, since the former caught all
  exceptions, and the latter don't catch async exceptions.

  However, in practice, nothing in the Annex monad uses async exceptions.
  Grepping for throwTo and killThread only finds stuff in the assistant,
  which does not seem related.

  Command.Add.undo is changed to accept a SomeException, and things that use
  it for rollback now catch non-async exceptions, rather than only
  IOExceptions.
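  The usual way to write the tryNonAsync/catchNonAsync pair; a sketch of the
  general technique, not necessarily Utility.Exception's exact code:

      {-# LANGUAGE ScopedTypeVariables #-}

      import Control.Exception

      -- Catch any synchronous exception, but rethrow asynchronous ones
      -- (eg, those delivered by throwTo or killThread), so they still
      -- interrupt the thread as intended.
      catchNonAsync :: IO a -> (SomeException -> IO a) -> IO a
      catchNonAsync a handler = a `catches`
          [ Handler (\(e :: AsyncException) -> throwIO e)
          , Handler (\(e :: SomeException) -> handler e)
          ]

      tryNonAsync :: IO a -> IO (Either SomeException a)
      tryNonAsync a = fmap Right a `catchNonAsync` (return . Left)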
* WebDAV: Avoid buffering whole file in memory when uploading. (Joey Hess,
  2014-08-07)

  The httpStorer will later also be used by S3.

  This commit was sponsored by Torbjørn Thorsen.
* convert WebDAV to new special remote interface, adding new-style chunking
  support (Joey Hess, 2014-08-06)

  Reusing the http connection when operating on chunks is not done yet; I
  had to submit some patches to DAV to support that. However, this is no
  slower than old-style chunking was.

  Note that it's a fileRetriever and a fileStorer, despite DAV using
  bytestrings that would allow streaming. As a result, upload/download of
  encrypted files is made a bit more expensive, since it spools them to temp
  files. This was needed to get the progress meters to work. There are
  probably ways to avoid that..

  But it turns out that the current DAV interface buffers the whole file
  content in memory, and I have sent in a patch to DAV to improve its
  interfaces. Using the new interfaces, it's certainly going to need to be a
  fileStorer, in order to read the file size from the file (getting the size
  of a bytestring would destroy laziness). It should be possible to use the
  new interface to make it be a byteRetriever, so I'll change that when I
  get to it.

  This commit was sponsored by Andreas Olsson.
* run Preparer to get Remover and CheckPresent actions (Joey Hess,
  2014-08-06)

  This will allow special remotes to eg, open a http connection and reuse
  it, while checking if chunks are present, or removing chunks. S3 and
  WebDAV both need this to support chunks with reasonable speed.

  Note that a special remote might want to cache a http connection across
  multiple requests. A simple case of this is that CheckPresent is typically
  called before Store or Remove. A remote using this interface can certainly
  use a Preparer that eg, uses a MVar to cache a http connection (see the
  sketch below).

  However, it's up to the remote to then deal with things like stale or
  stalled http connections when eg, doing a series of downloads from a
  remote and other places. There could be long delays between calls to a
  remote, which could lead to eg, http connection stalls; the machine might
  even move to a new network, etc.

  It might be nice to improve this interface later to allow the simple case
  without needing to handle the full complex case. One way to do it would be
  to have a `Transaction SpecialRemote cache`, where SpecialRemote contains
  methods for Storer, Retriever, Remover, and CheckPresent, that all expect
  to be passed a `cache`.
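  A sketch of that MVar caching idea; Connection and openConnection are
  stand-ins, not a real API:

      import Control.Concurrent.MVar

      -- Stand-in for whatever handle the remote reuses, eg a http manager.
      data Connection = Connection

      openConnection :: IO Connection  -- hypothetical
      openConnection = return Connection

      -- Lazily open the connection on first use and cache it for
      -- subsequent CheckPresent/Store/Remove calls.
      withCachedConnection :: MVar (Maybe Connection) -> (Connection -> IO a) -> IO a
      withCachedConnection cache a =
          modifyMVar cache $ \mc -> do
              conn <- maybe openConnection return mc
              r <- a conn
              return (Just conn, r)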
* pushed checkPresent exception handling out of Remote implementations (Joey
  Hess, 2014-08-06)

  I tend to prefer moving toward explicit exception handling, not away from
  it, but in this case, I think there are good reasons to let checkPresent
  throw exceptions:

  1. They can all be caught in one place (Remote.hasKey), and we know every
     possible exception is caught there now, which we didn't before.

  2. It simplified the code of the Remotes. I think it makes sense for
     Remotes to be able to be implemented without needing to worry about
     catching exceptions inside them. (Mostly.)

  3. Types.StoreRetrieve.Preparer can only work on things that return a
     Bool, which all the other relevant remote methods already did. I do not
     see a good way to generalize that type; my previous attempts failed
     miserably.
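  The catch-in-one-place pattern from point 1, sketched (hasKey's real
  definition in git-annex may differ; Key is a stand-in type here):

      import Control.Exception (SomeException, try)

      type Key = String  -- stand-in for git-annex's Key

      -- checkPresent is allowed to throw; hasKey is the one place that
      -- converts any exception into an Either for callers to inspect.
      hasKey :: (Key -> IO Bool) -> Key -> IO (Either String Bool)
      hasKey checkPresent k = do
          r <- try (checkPresent k) :: IO (Either SomeException Bool)
          return $ either (Left . show) Right r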
* finally properly fixed ssh zombie leak (Joey Hess, 2014-08-03)

  The leak was caused by the thread that ssh'd to send transferinfo not
  waiting on its ssh. Doh.
* move ugly rsync zombie workaround (Joey Hess, 2014-08-03)

  This reaping of any processes came to cause me problems when redoing the
  rsync special remote -- a gpg process that was running gets waited on and
  the place that then checks its return code fails.

  I cannot reproduce any zombies when using the rsync special remote. But I
  still can when using a normal git remote, accessed over ssh. There is 1
  zombie per file downloaded without this horrible hack enabled. So, move
  the hack to only be used in that case.
* remove redundant progress meter display code (Joey Hess, 2014-08-03)

  specialRemote handles all meter display, so this is redundant.
* roll ChunkedEncryptable into Special and improve interface (Joey Hess,
  2014-08-03)

  Allow disabling progress displays, for eg, rsync.
* whitespace (Joey Hess, 2014-08-03)
* better byteRetriever (Joey Hess, 2014-08-03)

  Make the byteRetriever be passed the callback that consumes the
  bytestring.

  This way, there's no worry about the lazy bytestring not all being read
  when the resource that's creating it is closed.

  Which in turn lets bup, ddar, and S3 each switch from using an unnecessary
  fileRetriever to a byteRetriever. So, more efficient on chunks and
  encrypted files.

  The only remaining fileRetrievers are hook and external, which really do
  retrieve to files.
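  The continuation-passing shape this describes, as a hedged sketch (types
  simplified from git-annex's actual Retriever):

      import qualified Data.ByteString.Lazy as L
      import System.IO (IOMode(ReadMode), withFile)

      -- The callback runs while the source handle is still open, so the
      -- lazy bytestring can be fully consumed before the resource is
      -- closed behind it.
      byteRetriever :: FilePath -> (L.ByteString -> IO a) -> IO a
      byteRetriever f consumer = withFile f ReadMode $ \h ->
          L.hGetContents h >>= consumer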
* convert glacier to new ChunkedEncryptable API (but do not support
  chunking) (Joey Hess, 2014-08-02)

  Chunking would complicate the assistant's code that checks when a pending
  retrieval of a key from glacier is done. It would perhaps be nice to
  support it to allow resuming, but not right now.

  Converting to the new API still simplifies the code.
* specialize Preparer a bit, so resourcePrepare can be added (Joey Hess,
  2014-08-02)

  The forall a. in Preparer made resourcePrepare not seem to be usable, so I
  specialized a to Bool. Which works for both Preparer Storer and Preparer
  Retriever, but wouldn't let the Preparer be used for hasKey as it
  currently stands.
* minor optimisation (Joey Hess, 2014-08-01)
* testremote: Test retrieveKeyFile resume (Joey Hess, 2014-08-01)

  And fixed a bug found by these tests; retrieveKeyFile would fail when the
  dest file was already complete.

  This commit was sponsored by Bradley Unterrheiner.
* fix a fencepost bug when resuming chunked store at end (Joey Hess,
  2014-08-01)

  Discovered thanks to the testremote command!
* fix chunk=0 (Joey Hess, 2014-08-01)

  Found by testremote.
* only chunk stable keys (Joey Hess, 2014-07-30)

  The content of unstable keys can potentially be different in different
  repos, so eg, resuming a chunked upload started by another repo would
  corrupt data.
* update progress after each chunk, at least (Joey Hess, 2014-07-29)

  This way, when the remote implementation neglects to update progress,
  there will still be a somewhat useful progress display, as long as chunks
  are used.
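  A sketch of that per-chunk fallback meter update; MeterUpdate here is a
  simplified stand-in for git-annex's type:

      import qualified Data.ByteString.Lazy as L

      type MeterUpdate = Integer -> IO ()  -- bytes sent so far (stand-in)

      -- Even if storeChunk never touches the meter itself, bump it once
      -- per chunk, so some progress is always visible.
      storeChunks :: MeterUpdate -> (L.ByteString -> IO ()) -> [L.ByteString] -> IO ()
      storeChunks meter storeChunk = go 0
        where
          go _ [] = return ()
          go sent (c:cs) = do
              storeChunk c
              let sent' = sent + fromIntegral (L.length c)
              meter sent'
              go sent' cs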
* fix cleanup of FileContents once done with them when retrieving (Joey
  Hess, 2014-07-29)
* optimise case of remote that retrieves FileContent, when chunks and
  encryption are not being used (Joey Hess, 2014-07-29)

  No need to read the whole FileContent only to write it back out to a file
  in this case. Can just rename! Yay.

  Also incidentally, fixed an attempt to open a file for write that was
  already opened for write, which caused a crash and deadlock.
* support chunking for all external special remotes! (Joey Hess, 2014-07-29)

  Removing code and at the same time adding great features, including
  upload/download resuming.

  This commit was sponsored by Romain Lenglet.
* better type for Retriever (Joey Hess, 2014-07-29)

  Putting a callback in the Retriever type allows for the callback to remove
  the retrieved file when it's done with it.

  I did not really want to make Retriever be fixed to Annex Bool, but when I
  tried to use Annex a, I got into some type of type mess.
* allow Retriever action to update the progress meter (Joey Hess,
  2014-07-29)

  Needed for eg, Remote.External.

  Generally, any Retriever that stores content in a file is responsible for
  updating the meter, while ones that produce a lazy bytestring cannot
  update the meter, so are not asked to.
* lift types from IO to Annex (Joey Hess, 2014-07-29)

  Some remotes like External need to run store and retrieve actions in
  Annex, not IO. In order to do that lift, I had to dive pretty deep into
  the utilities, making Utility.Gpg and Utility.Tmp be partly converted to
  using MonadIO, and Control.Monad.Catch for exception handling.

  There should be no behavior changes in this commit.

  This commit was sponsored by Michael Barabanov.
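  The shape of that generalization, sketched: a utility fixed to IO becomes
  polymorphic over MonadIO/MonadMask, so it can run in Annex too (a
  simplified illustration, not Utility.Tmp's actual code):

      import Control.Monad.Catch (MonadMask, bracket)
      import Control.Monad.IO.Class (MonadIO, liftIO)
      import System.Directory (removeFile)
      import System.IO (Handle, hClose, openTempFile)

      -- Instead of IO-only bracket, use Control.Monad.Catch's, which
      -- works in any MonadMask monad (such as Annex).
      withTmpFile :: (MonadIO m, MonadMask m) => (FilePath -> Handle -> m a) -> m a
      withTmpFile use = bracket
          (liftIO $ openTempFile "." "tmp")
          (\(f, h) -> liftIO $ hClose h >> removeFile f)
          (uncurry use)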
* add ContentSource type, for remotes that act on files rather than
  ByteStrings (Joey Hess, 2014-07-29)

  Note that currently nothing cleans up a ContentSource's file, when eg,
  retrieving chunks.
* fix non-checked hasKeyChunks (Joey Hess, 2014-07-29)
* resume interrupted chunked uploads (Joey Hess, 2014-07-28)

  Leverage the new chunked remotes to automatically resume uploads. Sort of
  like rsync, although of course not as efficient since this needs to start
  at a chunk boundary. But, unlike rsync, this method will work for S3,
  WebDAV, external special remotes, etc, etc. Only directory special remotes
  so far, but many more soon!

  This implementation will also allow starting an upload from one
  repository, interrupting it, and then resuming the upload to the same
  remote from an entirely different repository.

  Note that I added a comment that storeKey should atomically move the
  content into place once it's all received. This was already an
  undocumented requirement -- it's necessary for hasKey to work reliably.
  This resume code just uses hasKey to find the first chunk that's missing.

  Note that if there are two uploads of the same key to the same chunked
  remote, one might resume at the point the other had gotten to, but both
  will then redundantly upload. As before.

  In the non-resume case, this adds one hasKey call per storeKey, and only
  if the remote is configured to use chunks. Future work: Try to eliminate
  that hasKey. Notice that eg, `git annex copy --to` checks if the key is
  present before sending it, so is already running hasKey.. which could
  perhaps be cached and reused. However, this additional overhead is not
  very large compared with transferring an entire large file, and the
  ability to resume is certainly worth it.

  There is an optimisation in place for small files, that avoids trying to
  resume if the whole file fits within one chunk.

  This commit was sponsored by Georg Bauer.
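  The resume strategy in miniature; a hedged sketch with hasKey and
  storeChunk as stand-ins for the real interfaces:

      type Key = String  -- stand-in

      -- Skip leading chunks the remote already has (found via hasKey),
      -- then upload the rest. Relies on chunks being stored atomically,
      -- so hasKey answers reliably.
      resumeUpload :: (Key -> IO Bool) -> (Key -> IO ()) -> [Key] -> IO ()
      resumeUpload hasKey storeChunk chunkKeys = do
          missing <- dropWhileM hasKey chunkKeys
          mapM_ storeChunk missing
        where
          dropWhileM _ [] = return []
          dropWhileM p (k:ks) = do
              present <- p k
              if present then dropWhileM p ks else return (k:ks)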
* add ChunkMethod type and make Logs.Chunk use it, rather than assuming
  fixed size chunks (so eg, rolling hash chunks can be supported later)
  (Joey Hess, 2014-07-28)

  If a newer git-annex starts logging something else in the chunk log, it
  won't be used by this version, but it will be preserved when updating the
  log.
* resume interrupted chunked downloads (Joey Hess, 2014-07-27)

  Leverage the new chunked remotes to automatically resume downloads. Sort
  of like rsync, although of course not as efficient since this needs to
  start at a chunk boundary. But, unlike rsync, this method will work for
  S3, WebDAV, external special remotes, etc, etc. Only directory special
  remotes so far, but many more soon!

  This implementation will also properly handle starting a download from one
  remote, interrupting, and resuming from another one, and so on.

  (Resuming interrupted chunked uploads is similarly doable, although
  slightly more expensive.)

  This commit was sponsored by Thomas Djärv.
* use existing chunks even when chunk=0 (Joey Hess, 2014-07-27)

  When chunk=0, always try the unchunked key first. This avoids the overhead
  of needing to read the git-annex branch to find the chunkcount.

  However, if the unchunked key is not present, go on and try the chunks.

  Also, when removing a chunked key, update the chunkcounts even when
  chunk=0.
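  The lookup order described, sketched with stand-in retrieval actions:

      type Key = String  -- stand-in

      -- With chunk=0, try the cheap unchunked path first; only fall back
      -- to the chunk log (a git-annex branch read) if that fails.
      retrieve :: (Key -> IO Bool) -> (Key -> IO Bool) -> Key -> IO Bool
      retrieve getUnchunked getChunked k = do
          ok <- getUnchunked k
          if ok then return True else getChunked k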
* reorg (Joey Hess, 2014-07-27)
* comment typo (Joey Hess, 2014-07-27)
* faster storeChunks (Joey Hess, 2014-07-27)

  No need to process each L.ByteString chunk; instead ask it to split.

  Doesn't seem to have really sped things up much, but it also made the code
  simpler.

  Note that this does (and already did) buffer in memory. It seems that only
  the directory special remote could take advantage of streaming chunks to
  files w/o buffering, so probably won't add an interface to allow for that.
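  Letting the lazy ByteString do the splitting, as described; a sketch of
  the technique:

      import qualified Data.ByteString.Lazy as L

      -- Split into fixed-size chunks by repeated L.splitAt, instead of
      -- walking the bytestring's internal chunk list by hand.
      splitChunks :: Int -> L.ByteString -> [L.ByteString]
      splitChunks size b
          | L.null b = []
          | otherwise = let (h, t) = L.splitAt (fromIntegral size) b
                        in h : splitChunks size t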
* better Preparer interface (Joey Hess, 2014-07-27)

  This will allow things like WebDAV to open a single persistent connection
  and reuse it for all the chunked data.

  The crazy types allow for some nice code reuse.
* improve exception handling (Joey Hess, 2014-07-26)

  Push it down from needing to be done in every Storer, to being checked
  once inside ChunkedEncryptable.

  Also, catch exceptions from PrepareStorer and PrepareRetriever, just in
  case..
* better exception display (Joey Hess, 2014-07-26)
* fix another fallback bug (Joey Hess, 2014-07-26)