* add ContentSource type, for remotes that act on files rather than ByteStrings (Joey Hess, 2014-07-29)
|
|   Note that currently nothing cleans up a ContentSource's file when, eg,
|   retrieving chunks.
* fix non-checked hasKeyChunks (Joey Hess, 2014-07-29)
|
* make explicit the implicit requirement that CHECKPRESENT not say a key is present until it's all done being stored (Joey Hess, 2014-07-28)
|
* resume interrupted chunked uploads (Joey Hess, 2014-07-28)
|
|   Leverage the new chunked remotes to automatically resume uploads. Sort of
|   like rsync, although of course not as efficient, since this needs to
|   start at a chunk boundary. But, unlike rsync, this method will work for
|   S3, WebDAV, external special remotes, etc, etc. Only directory special
|   remotes so far, but many more soon!
|
|   This implementation will also allow starting an upload from one
|   repository, interrupting it, and then resuming the upload to the same
|   remote from an entirely different repository.
|
|   Note that I added a comment that storeKey should atomically move the
|   content into place once it's all received. This was already an
|   undocumented requirement -- it's necessary for hasKey to work reliably.
|   This resume code just uses hasKey to find the first chunk that's missing.
|
|   Note that if there are two uploads of the same key to the same chunked
|   remote, one might resume at the point the other had gotten to, but both
|   will then redundantly upload. As before.
|
|   In the non-resume case, this adds one hasKey call per storeKey, and only
|   if the remote is configured to use chunks. Future work: try to eliminate
|   that hasKey. Notice that, eg, `git annex copy --to` checks if the key is
|   present before sending it, so it is already running hasKey, which could
|   perhaps be cached and reused. However, this additional overhead is not
|   very large compared with transferring an entire large file, and the
|   ability to resume is certainly worth it.
|
|   There is an optimisation in place for small files, which avoids trying
|   to resume if the whole file fits within one chunk.
|
|   This commit was sponsored by Georg Bauer.
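The resume strategy described above (probe the remote for each chunk key in order, and restart at the first one it lacks) can be sketched as follows. This is an illustration only; `firstMissingChunk` and its `hasKey`-style argument are hypothetical stand-ins, not git-annex's actual code.

```haskell
-- Sketch (not git-annex's real code) of the resume probe: check each chunk
-- key in order and report the index of the first one the remote is missing.
type ChunkKey = String

firstMissingChunk :: Monad m => (ChunkKey -> m Bool) -> [ChunkKey] -> m (Maybe Int)
firstMissingChunk hasKey ks = go (zip [0 ..] ks)
  where
    go [] = return Nothing  -- every chunk is already present
    go ((i, k) : rest) = do
      present <- hasKey k
      if present then go rest else return (Just i)
```

With one hasKey call per already-stored chunk, this adds little overhead compared to transferring the remaining chunks themselves.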
* fix handling of removal of keys that are not present (Joey Hess, 2014-07-28)
|
* add ChunkMethod type and make Logs.Chunk use it, rather than assuming fixed size chunks (so, eg, rolling hash chunks can be supported later) (Joey Hess, 2014-07-28)
|
|   If a newer git-annex starts logging something else in the chunk log, it
|   won't be used by this version, but it will be preserved when updating
|   the log.
* Merge branch 'master' of ssh://git-annex.branchable.com into newchunks (Joey Hess, 2014-07-28)
|\
| * (no commit message) (divB, 2014-07-27)
| |
| * (no commit message) (divB, 2014-07-27)
| |
| * devblog (Joey Hess, 2014-07-27)
| |
* | resume interrupted chunked downloads (Joey Hess, 2014-07-27)
| |
| |   Leverage the new chunked remotes to automatically resume downloads.
| |   Sort of like rsync, although of course not as efficient, since this
| |   needs to start at a chunk boundary. But, unlike rsync, this method will
| |   work for S3, WebDAV, external special remotes, etc, etc. Only directory
| |   special remotes so far, but many more soon!
| |
| |   This implementation will also properly handle starting a download from
| |   one remote, interrupting, and resuming from another one, and so on.
| |   (Resuming interrupted chunked uploads is similarly doable, although
| |   slightly more expensive.)
| |
| |   This commit was sponsored by Thomas Djärv.
* | add key stability checking interface (Joey Hess, 2014-07-27)
| |
| |   Needed for resuming from chunks. Url keys are considered not stable. I
| |   considered treating url keys with a known size as stable, but just
| |   don't feel that is enough information.
* | use map for faster backend name lookup (Joey Hess, 2014-07-27)
| |
* | Merge branch 'master' into newchunks (Joey Hess, 2014-07-27)
|\|
| |   Conflicts:
| |       doc/design/assistant/chunks.mdwn
| * update (Joey Hess, 2014-07-27)
| |
* | use existing chunks even when chunk=0 (Joey Hess, 2014-07-27)
| |
| |   When chunk=0, always try the unchunked key first. This avoids the
| |   overhead of needing to read the git-annex branch to find the
| |   chunkcount. However, if the unchunked key is not present, go on and try
| |   the chunks.
| |
| |   Also, when removing a chunked key, update the chunkcounts even when
| |   chunk=0.
* | reorg (Joey Hess, 2014-07-27)
| |
* | comment typo (Joey Hess, 2014-07-27)
| |
* | faster storeChunks (Joey Hess, 2014-07-27)
| |
| |   No need to process each L.ByteString chunk, instead ask it to split.
| |   Doesn't seem to have really sped things up much, but it also made the
| |   code simpler.
| |
| |   Note that this does (and already did) buffer in memory. It seems that
| |   only the directory special remote could take advantage of streaming
| |   chunks to files w/o buffering, so probably won't add an interface to
| |   allow for that.
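The "ask it to split" idea can be illustrated with plain lazy-bytestring operations. A minimal sketch, not the commit's actual code; `splitIntoChunks` is a hypothetical name.

```haskell
-- Sketch: carve a lazy ByteString into fixed-size chunks with L.splitAt,
-- instead of walking its internal chunk list by hand. Because the input is
-- lazy, only one chunk at a time needs to be forced by a consumer.
import qualified Data.ByteString.Lazy as L
import Data.Int (Int64)

splitIntoChunks :: Int64 -> L.ByteString -> [L.ByteString]
splitIntoChunks n b
  | L.null b  = []  -- note: n must be positive, or this never terminates
  | otherwise = let (h, t) = L.splitAt n b in h : splitIntoChunks n t
```

For example, a 10-byte input split at size 4 yields chunks of 4, 4, and 2 bytes.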
* | better Preparer interface (Joey Hess, 2014-07-27)
| |
| |   This will allow things like WebDAV to open a single persistent
| |   connection and reuse it for all the chunked data. The crazy types
| |   allow for some nice code reuse.
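The shape of such an interface might look like this minimal sketch: open an expensive resource once (eg, a persistent connection), derive a helper from it, and run the whole chunked transfer against that helper. The names here are hypothetical illustrations of the prepare-once idea, not the actual Preparer types.

```haskell
-- Sketch of prepare-once, reuse-per-chunk. bracket guarantees the
-- connection is closed even if the transfer throws.
import Control.Exception (bracket)

withPrepared :: IO conn              -- open the connection
             -> (conn -> IO ())      -- close it when done
             -> (conn -> IO helper)  -- prepare a reusable helper
             -> (helper -> IO a)     -- use it for all the chunks
             -> IO a
withPrepared open close prepare use =
  bracket open close (\conn -> prepare conn >>= use)
```

The payoff is that a remote pays its setup cost once per transfer rather than once per chunk.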
* | update docs for chunking (Joey Hess, 2014-07-26)
| |
* | improve exception handling (Joey Hess, 2014-07-26)
| |
| |   Push it down from needing to be done in every Storer, to being checked
| |   once inside ChunkedEncryptable. Also, catch exceptions from
| |   PrepareStorer and PrepareRetriever, just in case.
* | add some more exception handling primitives (Joey Hess, 2014-07-26)
| |
* | better exception display (Joey Hess, 2014-07-26)
| |
* | fix key checking when a directory special remote's directory is missing (Joey Hess, 2014-07-26)
| |
| |   The best thing to do in this case is return Left, so that anything
| |   that tries to access it will fail.
* | fix another fallback bug (Joey Hess, 2014-07-26)
| |
* | allM has slightly better memory use (Joey Hess, 2014-07-26)
| |
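A standalone definition of an `allM` along these lines looks like the following; the commit's actual change is not visible in this log, so this is only a sketch of the function being referred to.

```haskell
-- allM: a monadic version of 'all' that stops at the first False, so it
-- never forces the rest of the list and can let it be collected.
allM :: Monad m => (a -> m Bool) -> [a] -> m Bool
allM _ [] = return True
allM p (x : xs) = p x >>= \b -> if b then allM p xs else return False
```

Short-circuiting matters here because the predicate may be an expensive action, such as a per-chunk presence check on a remote.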
* | fix fallback to other chunk size when first does not have it (Joey Hess, 2014-07-26)
| |
| * Merge branch 'master' of ssh://git-annex.branchable.com (Joey Hess, 2014-07-26)
| |\
| * | devblog (Joey Hess, 2014-07-26)
| | |
* | | doc update for new chunking (Joey Hess, 2014-07-26)
| | |
* | | fix build (Joey Hess, 2014-07-26)
| | |
* | | fix build (Joey Hess, 2014-07-26)
| | |
* | | convert directory special remote to using ChunkedEncryptable (Joey Hess, 2014-07-26)
| | |
| | |   And clean up legacy chunking code, which is in its own module now. So
| | |   much cleaner!
| | |
| | |   This commit was sponsored by Henrik Ahlgren.
* | | Support for remotes that are chunkable and encryptable. (Joey Hess, 2014-07-26)
| | |
| | |   I'd have liked to keep these two concepts entirely separate, but they
| | |   are entangled: storing a key in an encrypted and chunked remote needs
| | |   to generate chunk keys, encrypt the keys, chunk the data, encrypt the
| | |   chunks, and send them to the remote. Similar for retrieval, etc. So,
| | |   here's an implementation of all of that.
| | |
| | |   The total win here is that every remote was implementing encrypted
| | |   storage and retrieval, and now it can move into this single place. I
| | |   expect this to result in several hundred lines of code being removed
| | |   from git-annex eventually!
| | |
| | |   This commit was sponsored by Henrik Ahlgren.
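The store-side entanglement the message describes (chunk the data, encrypt each chunk, derive and encrypt a key per chunk, send) can be sketched as one pipeline. Every function parameter here is a hypothetical stand-in, not git-annex's real API.

```haskell
-- Sketch of the store pipeline for a chunked, encrypted remote. The type
-- variables keep the sketch honest: key derivation, key encryption, data
-- chunking, chunk encryption, and sending are all supplied by the caller.
storeChunkedEncrypted
  :: (Int -> key)               -- derive the nth chunk key (hypothetical)
  -> (key -> ekey)              -- encrypt a chunk key
  -> (bytes -> [bytes])         -- split the data into chunks
  -> (bytes -> IO bytes)        -- encrypt one chunk
  -> (ekey -> bytes -> IO ())   -- send one encrypted chunk to the remote
  -> bytes
  -> IO ()
storeChunkedEncrypted mkKey encKey chunk encrypt send b =
  mapM_ step (zip [1 ..] (chunk b))
  where
    step (n, c) = encrypt c >>= send (encKey (mkKey n))
```

Retrieval composes the same pieces in reverse, which is why keeping chunking and encryption in one place removes so much duplicated code from the individual remotes.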
* | | finish up basic chunked remote groundwork (Joey Hess, 2014-07-26)
| | |
| | |   Chunk retrieval and reassembly, removal, and checking if all
| | |   necessary chunks are present.
| | |
| | |   This commit was sponsored by Damien Raude-Morvan.
* | | wording (Joey Hess, 2014-07-26)
| | |
| | * added output of ls -lb in git directory to show that the file is not added to the annex (https://www.google.com/accounts/o8/id?id=AItOawmURXBzaYE1gmVc-X9eLAyDat_6rHPl670, 2014-07-26)
| | |
| | * (no commit message) (https://www.google.com/accounts/o8/id?id=AItOawmURXBzaYE1gmVc-X9eLAyDat_6rHPl670, 2014-07-26)
| | |
* | | reorg (Joey Hess, 2014-07-26)
| | |
* | | Merge branch 'master' into newchunks (Joey Hess, 2014-07-26)
|\| |
| | * Added a comment (https://www.google.com/accounts/o8/id?id=AItOawk9nck8WX8-ADF3Fdh5vFo4Qrw1I_bJcR8, 2014-07-26)
| |/
| * Merge branch 'master' of ssh://git-annex.branchable.com (Joey Hess, 2014-07-25)
| |\
| * | devblog (Joey Hess, 2014-07-25)
| | |
| * | Fix cost calculation for non-encrypted remotes. (Joey Hess, 2014-07-25)
| | |
| | |   Encryptable types of remotes that were not actually encrypted still
| | |   had the encryptedRemoteCostAdj applied to their configured cost,
| | |   which was a bug.
* | | support new style chunking in directory special remote (Joey Hess, 2014-07-25)
| | |
| | |   Only when storing non-encrypted so far; not retrieving, checking if a
| | |   key is present, or removing.
| | |
| | |   This commit was sponsored by Renaud Casenave-Péré.
* | | core implementation of new style chunking (Joey Hess, 2014-07-25)
| | |
| | |   Not yet used by any special remotes, but should not be too hard to
| | |   add it to most of them.
| | |
| | |   storeChunks is the hairy bit! It's loosely based on
| | |   Remote.Directory.storeLegacyChunked. The object is read in using a
| | |   lazy bytestring, which is streamed through, creating chunks as
| | |   needed, without ever buffering more than 1 chunk in memory.
| | |
| | |   Getting the progress meter update to work right was also fun, since
| | |   progress meter values are absolute. Finessed by constructing an
| | |   offset meter.
| | |
| | |   This commit was sponsored by Richard Collins.
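The "offset meter" trick can be sketched like so: progress callbacks report absolute byte counts, so a per-chunk callback, which restarts from zero on every chunk, is wrapped with the offset of that chunk's start. `MeterUpdate` here is a simplified stand-in for git-annex's real progress types.

```haskell
-- Sketch: wrap an absolute progress meter so that relative, per-chunk
-- progress values still report correct absolute positions.
type MeterUpdate = Integer -> IO ()  -- called with absolute bytes complete

offsetMeterUpdate :: MeterUpdate -> Integer -> MeterUpdate
offsetMeterUpdate meter offset n = meter (offset + n)
```

When storing chunk k, the wrapper is constructed with the total size of chunks 1..k-1 as the offset, so the meter only ever moves forward.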
* | | use same hash directories for chunked key as are used for its parent (Joey Hess, 2014-07-25)
| | |
| | |   This avoids a proliferation of hash directories when using new-style
| | |   chunking, and should improve performance, since chunks are accessed
| | |   in sequence and so should have a common locality.
| | |
| | |   Of course, when a chunked key is encrypted, its hash directories
| | |   have no relation to the parent key.
| | |
| | |   This commit was sponsored by Christian Kellermann.
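The locality idea can be sketched as: place each chunk under its *parent* key's hash directory, so a chunked object's pieces sit together instead of scattering. Both `hashDir` and the `.chunkN` naming below are hypothetical stand-ins, not git-annex's real on-disk layout.

```haskell
-- Sketch: a chunk's path is derived from the parent key's hash directory,
-- so all chunks of one object share a directory (and disk locality).
type Key = String

chunkKeyPath :: (Key -> FilePath) -> Key -> Int -> FilePath
chunkKeyPath hashDir parent n =
  hashDir parent ++ "/" ++ parent ++ ".chunk" ++ show n
```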
* | | thought about chunk key hashing (Joey Hess, 2014-07-25)
|/ /
| * Added a comment (https://www.google.com/accounts/o8/id?id=AItOawlVUq_c3-lrQBculOEUu3yjvdavE7JbvEI, 2014-07-25)
| |