path: root/Types/Remote.hs
Commit message / Author / Age
* split out Types.Export (Joey Hess, 2017-09-15)
* avoid unnecessary db queries when exported directory can't be empty (Joey Hess, 2017-09-15)
In rename foo/bar to foo/baz, foo can't be empty. In delete zxyyz, there's no exported directory (top doesn't count).
* remove empty directories when removing from export (Joey Hess, 2017-09-15)
The subtle part of this is what happens when the remote fails to remove an empty directory. The removal from the export needs to fail in that case, so the removal will be tried again later. However, removeExportLocation has already been run and changed the export db, so if the next run checks getExportLocation, it might decide nothing remains to be done, leaving the empty directory.

Dealt with that by making removeEmptyDirectories handle a failure by calling addExportLocation, reverting the database changes so the next run will be guaranteed to try deleting the empty directory again.

This commit was sponsored by Thomas Hochstein on Patreon.
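A minimal sketch of that revert-on-failure idea, with stand-in types: the IORef Set plays the role of the export database, and removedir stands in for the remote's directory removal action; none of these names are git-annex's real API.

    import Control.Monad (unless)
    import Data.IORef
    import qualified Data.Set as S

    type ExportDb = IORef (S.Set FilePath)

    -- Try to delete a now-empty directory on the remote. On failure,
    -- revert the earlier database change (the addExportLocation step
    -- described above) so the next run retries the deletion.
    removeEmptyDirectories :: (FilePath -> IO Bool) -> ExportDb -> FilePath -> IO Bool
    removeEmptyDirectories removedir db loc = do
        ok <- removedir loc
        unless ok $
            modifyIORef' db (S.insert loc)
        return ok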
* implement removeExportDirectory (Joey Hess, 2017-09-15)
Not yet called by Command.Export.

WebDAV needs this to clean up empty collections. Also, example.sh turned out to not be cleaning up directories when removing content from them, so it made sense for it to use this.

Remote.Directory did not need it, and since its cleanup method for empty directories is more efficient than what Command.Export will need to do to find empty directories, it uses Nothing so that extra work can be avoided.

This commit was sponsored by Thom May on Patreon.
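Roughly the shape of that optional method, using simplified stand-in types rather than git-annex's real ones:

    newtype ExportLocation = ExportLocation FilePath
    newtype ExportDirectory = ExportDirectory FilePath

    data ExportActions = ExportActions
        { storeExport :: FilePath -> ExportLocation -> IO Bool
        , removeExport :: ExportLocation -> IO Bool
        -- Nothing lets a remote like Remote.Directory opt out when it
        -- has a cheaper way to deal with empty directories.
        , removeExportDirectory :: Maybe (ExportDirectory -> IO Bool)
        }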
* export: cache connections for S3 and webdav (Joey Hess, 2017-09-12)
* External special remote protocol extended to support export. (Joey Hess, 2017-09-08)
Also updated example.sh to support export.

This commit was supported by the NSF-funded DataLad project.
* prevent exporttree=yes on remotes that don't support exports (Joey Hess, 2017-09-07)
Don't allow "exporttree=yes" to be set when the special remote does not support exports. That would be confusing since the user would set up a special remote for exports, but `git annex export` to it would later fail.

This commit was supported by the NSF-funded DataLad project.
* export file renaming (Joey Hess, 2017-09-06)
This is seriously super hairy. It has to handle interrupted exports, which may be resumed with the same or a different tree. It also has to recover from export conflicts, which could cause the wrong content to be renamed to a file.

I think this works, or is close to working. See the update to the design for how it works.

This is definitely not optimal, in that it does more renames than are necessary. It would probably be worth finding the keys that are really renamed and only renaming those. But let's get the "simple" approach to work first..

This commit was supported by the NSF-funded DataLad project.
* git annex get from exports (Joey Hess, 2017-09-04)
Straightforward enough, except for the needed belt-and-suspenders sanity checks to avoid foot shooting due to exports not being key/value stores.

  * Even when annex.verify=false, always verify from exports.
  * Only get files from exports that use a backend that supports checksum verification.
  * Never trust exports, even if the user says to, because then `git annex drop` would drop content if the export seemed to contain a copy.

This commit was supported by the NSF-funded DataLad project.
* use export db to correctly handle duplicate files (Joey Hess, 2017-09-04)
Removed incorrect UniqueKey key in db schema; a key can appear multiple times with different files.

The database has to be flushed after each removal. But when adding files to the export, lots of changes are able to be queued up w/o flushing. So it's still fairly efficient.

If large removals of files from exports are too slow, an alternative would be to make two passes over the diff, one pass queueing deletions from the database, then a flush and a second pass updating the location log. But that would use more memory, and need to look up exportKey twice per removed file, so I've avoided that optimisation for now.

This commit was supported by the NSF-funded DataLad project.
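A sketch of that flush discipline, with an in-memory queue standing in for the real database layer (all names here are illustrative):

    import Data.IORef

    data Change = AddLoc FilePath | RemoveLoc FilePath

    data ExportDb = ExportDb
        { queued :: IORef [Change]    -- unflushed changes
        , commit :: [Change] -> IO () -- write a batch to the database
        }

    -- Adds can pile up cheaply without touching the database.
    queueAdd :: ExportDb -> FilePath -> IO ()
    queueAdd db loc = modifyIORef' (queued db) (AddLoc loc :)

    -- Removals must be visible immediately, so flush after each one.
    removeNow :: ExportDb -> FilePath -> IO ()
    removeNow db loc = do
        cs <- readIORef (queued db)
        writeIORef (queued db) []
        commit db (reverse (RemoveLoc loc : cs))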
* implement exporttree=yes configuration (Joey Hess, 2017-09-04)
  * Only export to remotes that were initialized to support it.
  * Prevent storing key/value on export remotes.
  * Prevent enabling exporttree=yes and encryption in the same remote.

SetupStage Enable was changed to take the old RemoteConfig. This allowed only setting exporttree when initially setting up a remote, and not configuring it later after stuff might already be stored in the remote.

Went with =yes rather than =true for consistency with other parts of git-annex. Changed docs accordingly.

This commit was supported by the NSF-funded DataLad project.
* refactor ExportActions (Joey Hess, 2017-09-01)
This will allow disabling exports for remotes that are not configured to allow them. Also, exportSupported will be useful for the external special remote to probe.

This commit was supported by the NSF-funded DataLad project.
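One way the probe could gate everything off, sketched with illustrative types (not the real ExportActions record):

    data ExportActions = ExportActions
        { exportSupported :: IO Bool
        , storeExport :: FilePath -> FilePath -> IO Bool
        }

    -- A stand-in set of actions that refuses all exports.
    exportsDisabled :: ExportActions
    exportsDisabled = ExportActions
        { exportSupported = return False
        , storeExport = \_ _ -> return False
        }

    -- Probe once, and swap in the disabled actions when unsupported.
    adjustExportable :: ExportActions -> IO ExportActions
    adjustExportable ea = do
        ok <- exportSupported ea
        return (if ok then ea else exportsDisabled)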
* make storeExport atomic (Joey Hess, 2017-08-31)
This avoids needing to deal with the complexity of partially transferred files in the export. We'd not be able to resume uploading to such a file anyway, so just avoid them.

The implementation in Remote.Directory is not completely ideal, because it could leave the temp file hanging around in the export directory. This only happens if it's killed with -9, or there's a power failure; normally viaTmp cleans up after itself, even when interrupted. I could not see a better way to do it though, since the export directory might be the root of a filesystem.

Also some design thoughts on resuming, which depend on storeExport being atomic.

This commit was sponsored by Fernando Jimenez on Patreon.
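A minimal sketch of the temp-file-then-rename approach; this viaTmp is a simplified stand-in for git-annex's helper, and it assumes the temp file can live next to the destination (which is why a -9 kill can leave it behind):

    import Control.Exception (SomeException, catch, onException)
    import System.Directory (removeFile, renameFile)

    viaTmp :: (FilePath -> IO ()) -> FilePath -> IO ()
    viaTmp writer dest = do
        let tmp = dest ++ ".tmp"
        writer tmp `onException` (removeFile tmp `catch` ignore)
        -- rename is atomic on POSIX filesystems, so the destination
        -- is only ever seen complete, never partially written.
        renameFile tmp dest
      where
        ignore :: SomeException -> IO ()
        ignore _ = return ()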
* provide file with content to export (Joey Hess, 2017-08-29)
Rather than providing the key to export, provide the file.

When exporting a treeish that contains files that are not annexed, this will let the content of those files also be exported. There's still a Key in the interface; it will be used by the external special remote protocol. A SHA1 key can be used when exporting non-annexed files.

This commit was sponsored by Brock Spratlen on Patreon.
* add API for exporting (Joey Hess, 2017-08-29)
Implemented so far for the directory special remote.

Several remotes don't make sense to export to. Regular Git remotes, obviously, do not. Bup remotes almost certainly do not, since bup would need to be used to extract the export; same story for Ddar. Web and Bittorrent are download-only. GCrypt is always encrypted so exporting to it would be pointless. There's probably no point complicating the Hook remotes with exporting at this point.

External, S3, Glacier, WebDAV, Rsync, and possibly Tahoe should be modified to support export.

Thought about trying to reuse the storeKey/retrieveKeyFile/removeKey interface, rather than adding a new interface. But, it seemed better to keep it separate, to avoid a complicated interface that sometimes encrypts/chunks key/value storage and sometimes uses non-key/value storage. Any common parts can be factored out.

Note that storeExport is not atomic. doc/design/exporting_trees_to_special_remotes.mdwn has some things in the "resuming exports" section that bear on this decision. Basically, I don't think, at this time, that an atomic storeExport would help with resuming, because exports are not key/value storage, and we can't be sure that a partially uploaded file is the same content we're currently trying to export.

Also, note that ExportLocation will always use unix path separators. This is important, because users may export from a mix of windows and unix, and it avoids complicating the API with path conversions, and ensures that in such a mix, they always use the same locations for exports.

This commit was sponsored by Bruno BEAUFILS on Patreon.
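The separator rule could be pinned down with a smart constructor along these lines; mkExportLocation is illustrative, not the real API:

    newtype ExportLocation = ExportLocation FilePath
        deriving (Eq, Ord, Show)

    -- Normalise to unix-style separators, so exports made from
    -- windows and unix agree on the same locations.
    mkExportLocation :: FilePath -> ExportLocation
    mkExportLocation = ExportLocation . map fixSep
      where
        fixSep '\\' = '/'
        fixSep c = c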
* add SetupStage parameter to RemoteType.setup (Joey Hess, 2017-02-07)
Most remotes have an idempotent setup that can be reused for enableremote, but in a few cases, it needs to tell which, and whether a UUID that was provided to setup was used.

This is groundwork for making initremote be able to provide a UUID. It should not change any behavior.

Note that it would be nice to make the UUID always be provided to setup, and make setup not need to generate and return a UUID. What prevented this simplification is Remote.Git.gitSetup, which needs to reuse the UUID of the git remote when setting it up, and so has to return that UUID.

This commit was sponsored by Thom May on Patreon.
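Roughly the shape such a parameter could take, with RemoteConfig simplified to a plain Map for the sketch:

    import qualified Data.Map as M

    type RemoteConfig = M.Map String String

    data SetupStage
        = Init                 -- initremote: setting up a new remote
        | Enable RemoteConfig  -- enableremote: reusing an existing setup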
* Pass the various gnupg-options configs to gpg in several cases where they were not before. (Joey Hess, 2016-05-23)
Removed the instance LensGpgEncParams RemoteConfig because it encouraged code that does not take the RemoteGitConfig into account.

RemoteType's setup was changed to take a RemoteGitConfig, although the only place that is able to provide a non-empty one is enableremote, when it's changing an existing remote. This led to several follow-on changes, and got RemoteGitConfig plumbed through.
* content locking during drop working for local git remotes (Joey Hess, 2015-10-09)
Only ssh remotes lack locking now.
* finish and use lockContent interface (Joey Hess, 2015-10-09)
* support invalidating existing VerifiedCopys (Joey Hess, 2015-10-08)
* add removeKey action to Remote (Joey Hess, 2015-10-08)
Not implemented for any remotes yet; probably the git remote is the only one that will ever implement it.
* other 80% of avoiding verification when hard linking to objects in shared repo (Joey Hess, 2015-10-02)
In c3b38fb2a075b4250e867ebd910324c65712c747, it actually only handled uploading objects to a shared repository. Avoiding verification when downloading objects from a shared repository was a lot harder.

On the plus side, if the process of downloading a file from a remote is able to verify its content on the side, the remote can indicate this now, and avoid the extra post-download verification.

As of yet, I don't have any remotes (except Git) using this ability. Some more work would be needed to support it in special remotes. It would make sense for tahoe to implicitly verify things downloaded from it; as long as you trust your tahoe server (which typically runs locally), there's cryptographic integrity. OTOH, despite bup being based on shas, a bup repo under an attacker's control could have the git ref used for an object changed, and so a bup repo shouldn't implicitly verify. Indeed, tahoe seems unique in being trustworthy enough to implicitly verify.
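The remote's signal could be as small as a two-constructor type; these names are illustrative, not necessarily the ones used:

    data Verification
        = Verified   -- the transfer itself checked content integrity
        | UnVerified -- caller must still verify the downloaded object

    -- Skip the post-download check when the transfer already verified.
    postDownloadCheck :: Verification -> IO Bool -> IO Bool
    postDownloadCheck Verified _ = return True
    postDownloadCheck UnVerified check = check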
* Simplify setup process for an ssh remote. (Joey Hess, 2015-08-05)
Now it suffices to run git remote add, followed by git-annex sync. Now the remote is automatically initialized for use by git-annex, where before the git-annex branch had to manually be pushed before using git-annex sync. Note that this involved changes to git-annex-shell, so if the remote is using an old version, the manual push is still needed.

Implementation required git-annex-shell be changed, so configlist can autoinit a repository even when no git-annex branch has been pushed yet. Unfortunate because we'll have to wait for it to get deployed to servers before being able to rely on this change in the documentation.

Did consider making git-annex sync push the git-annex branch to repos that didn't have a uuid, but this seemed difficult to do without complicating it in messy ways.

It would be cleaner to split a command out from configlist to handle the initialization. But this is difficult without sacrificing backwards compatibility, for users of old git-annex versions which would not use the new command.
* Merge branch 'master' into concurrentprogress (Joey Hess, 2015-05-12)
Conflicts:
    Command/Fsck.hs
    Messages.hs
    Remote/Directory.hs
    Remote/Git.hs
    Remote/Helper/Special.hs
    Types/Remote.hs
    debian/changelog
    git-annex.cabal
* comment (Joey Hess, 2015-04-18)
* add filename to progress bar, and display ok/failed at end (Joey Hess, 2015-04-14)
This needed plumbing an AssociatedFile through retrieveKeyFileCheap.
* update my email address and homepage url (Joey Hess, 2015-01-21)
* Expand checkurl to support recommended filename, and multi-file-urls (Joey Hess, 2014-12-11)
This commit was sponsored by an anonymous bitcoiner.
* Revert "let url claims optionally include a suggested filename"Gravatar Joey Hess2014-12-11
| | | | | | This reverts commit bc0bf97b20c48e1d1a35d25e2e76a311c102438c. Putting filename in the claim was a bad idea.
* let url claims optionally include a suggested filename (Joey Hess, 2014-12-11)
* Urls can now be claimed by remotes. This will allow creating, for example, an external special remote that handles magnet: and *.torrent urls. (Joey Hess, 2014-12-08)
* implement CLAIMURL for external special remote (Joey Hess, 2014-12-08)
* add stub claimUrl (Joey Hess, 2014-12-08)
* add per-remote-type info (Joey Hess, 2014-10-21)
Now `git annex info $remote` shows info specific to the type of the remote, for example, it shows the rsync url.

Remote types that support encryption or chunking also include that in their info.

This commit was sponsored by Ævar Arnfjörð Bjarmason.
* testremote: Add testing of behavior when remote is not available (Joey Hess, 2014-08-10)
Added a mkUnavailable method, which a Remote can use to generate a version of itself that is not available. Implemented for several, but not yet all remotes.

This allows testing that checkPresent properly throws an exception when it cannot check if a key is present or not. It also allows testing that the other methods don't throw exceptions in these circumstances.

This immediately found several bugs, which this commit also fixes!

  * git remotes using ssh accidentally had checkPresent return an exception, rather than throwing it
  * The chunking code accidentally returned False rather than propagating an exception when there were no chunks and checkPresent threw an exception for the non-chunked key.

This commit was sponsored by Carlo Matteo Capocasa.
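The hook could look something like this, with heavily simplified stand-in types (the real Remote record is much larger):

    data Remote = Remote
        { remoteName :: String
        , checkPresent :: String -> IO Bool   -- throws when unreachable
        , mkUnavailable :: IO (Maybe Remote)  -- Nothing if unsupported
        }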
* pushed checkPresent exception handling out of Remote implementations (Joey Hess, 2014-08-06)
I tend to prefer moving toward explicit exception handling, not away from it, but in this case, I think there are good reasons to let checkPresent throw exceptions:

  1. They can all be caught in one place (Remote.hasKey), and we know every possible exception is caught there now, which we didn't before.
  2. It simplified the code of the Remotes. I think it makes sense for Remotes to be able to be implemented without needing to worry about catching exceptions inside them. (Mostly.)
  3. Types.StoreRetrieve.Preparer can only work on things that return a Bool, which all the other relevant remote methods already did. I do not see a good way to generalize that type; my previous attempts failed miserably.
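Catching everything in one place might be as simple as this sketch, where checkPresent is passed in and any exception becomes a Left (simplified signatures, not the real ones):

    import Control.Exception (SomeException, try)

    hasKey :: (String -> IO Bool) -> String -> IO (Either String Bool)
    hasKey checkPresent key = do
        r <- try (checkPresent key) :: IO (Either SomeException Bool)
        return (either (Left . show) Right r)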
* resume interrupted chunked uploads (Joey Hess, 2014-07-28)
Leverage the new chunked remotes to automatically resume uploads. Sort of like rsync, although of course not as efficient since this needs to start at a chunk boundary.

But, unlike rsync, this method will work for S3, WebDAV, external special remotes, etc, etc. Only directory special remotes so far, but many more soon!

This implementation will also allow starting an upload from one repository, interrupting it, and then resuming the upload to the same remote from an entirely different repository.

Note that I added a comment that storeKey should atomically move the content into place once it's all received. This was already an undocumented requirement -- it's necessary for hasKey to work reliably. This resume code just uses hasKey to find the first chunk that's missing.

Note that if there are two uploads of the same key to the same chunked remote, one might resume at the point the other had gotten to, but both will then redundantly upload. As before.

In the non-resume case, this adds one hasKey call per storeKey, and only if the remote is configured to use chunks. Future work: Try to eliminate that hasKey. Notice that eg, `git annex copy --to` checks if the key is present before sending it, so is already running hasKey.. which could perhaps be cached and reused.

However, this additional overhead is not very large compared with transferring an entire large file, and the ability to resume is certainly worth it. There is an optimisation in place for small files, that avoids trying to resume if the whole file fits within one chunk.

This commit was sponsored by Georg Bauer.
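The probing step reduces to finding the first chunk the remote lacks; a sketch, generic in the chunk key type:

    -- Skip leading chunks the remote already has, and return the
    -- suffix to upload, starting at the first missing chunk.
    firstMissing :: (k -> IO Bool) -> [k] -> IO [k]
    firstMissing _ [] = return []
    firstMissing hasKey (c:cs) = do
        present <- hasKey c
        if present
            then firstMissing hasKey cs
            else return (c:cs)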
* fix handling of removal of keys that are not present (Joey Hess, 2014-07-28)
* wording (Joey Hess, 2014-07-26)
* plumb creds from webapp to initremote (Joey Hess, 2014-02-11)
Avoids abusing environment variables, which was always a hack and won't work on windows.
* add GETAVAILABILITY to external special remote protocol (Joey Hess, 2014-01-13)
And some reworking of types, and added an annex-availability git config setting.
* webapp: Improve UI around remotes that have no annex.uuid set, either because setup of them is incomplete, or because the remote git repository is not a git-annex repository. (Joey Hess, 2013-11-07)
Complicated by such repositories potentially being repos that should have an annex.uuid, but it failed to be gotten, perhaps due to the past ssh repo setup bugs. This is handled now by an Upgrade Repository button.
* assistant: Support repairing git remotes that are locally accessible (eg, on removable drives) (Joey Hess, 2013-10-27)
gcrypt remotes are not yet handled.

This commit was sponsored by Sören Brunk.
* add remote fsck interface (Joey Hess, 2013-10-11)
Currently only implemented for local git remotes. May try to add support to git-annex-shell for ssh remotes later. Could conceivably also be supported by some special remote, although that seems unlikely.

Cronner uses this when available, and when not falls back to fsck --fast --from remote.

git annex fsck --from does not itself use this interface. To do so, I would need to pass --fast and all other options that influence fsck on to the git annex fsck that it runs inside the remote. And that seems like a lot of work for a result that would be no better than

    cd remote; git annex fsck

This may need to be revisited if git-annex-shell gets support, since it may be the case that the user cannot ssh to the server to run git-annex fsck there, but can run git-annex-shell there.

This commit was sponsored by Damien Diederen.
* enabling rsync.net gcrypt repos (Joey Hess, 2013-09-26)
Still need to detect when the user is trying to create a repo that already exists, and jump to the enabling code.
* Support hot-swapping of removable drives containing gcrypt repositories. (Joey Hess, 2013-09-12)
To support this, a core.gcrypt-id is stored by git-annex inside the git config of a local gcrypt repository, when setting it up. That is compared with the remote's cached gcrypt-id. When different, a drive has been changed. git-annex then looks up the remote config for the uuid mapped from the core.gcrypt-id, and tweaks the configuration appropriately. When there is no known config for the uuid, it will refuse to use the remote.
* partially complete gcrypt remote (local send done; rest not) (Joey Hess, 2013-09-07)
This is a git-remote-gcrypt encrypted special remote. Only sending files in to the remote works, and only for local repositories.

Most of the work so far has involved making initremote work. A particular problem is that remote setup in this case needs to generate its own uuid, derived from the gcrypt-id. That required some larger changes in the code to support.

For ssh remotes, this will probably just reuse Remote.Rsync's code, so should be easy enough. And for downloading from a web remote, I will need to factor out the part of Remote.Git that does that.

One particular thing that will need work is supporting hot-swapping a local gcrypt remote. I think it needs to store the gcrypt-id in the git config of the local remote, so that it can check it every time, and compare with the cached annex-uuid for the remote. If there is a mismatch, it can change both the cached annex-uuid and the gcrypt-id. That should work, and I laid some groundwork for it by already reading the remote's config when it's local. (Also needed for other reasons.)

This commit was sponsored by Daniel Callahan.
* moved AssociatedFile definition (Joey Hess, 2013-07-04)
* connect existing meters to the transfer log for downloads (Joey Hess, 2013-04-11)
Most remotes have meters in their implementations of retrieveKeyFile already. Simply hooking these up to the transfer log makes that information available. Easy peasy.

This is particularly valuable information for encrypted remotes, which otherwise bypass the assistant's polling of temp files, and so don't have good progress bars yet.

Still some work to do here (see progressbars.mdwn changes), but this is entirely an improvement from the lack of progress bars for encrypted downloads.
* webapp: Progress bar fixes for many types of special remotes. (Joey Hess, 2013-03-28)
There was confusion in different parts of the progress bar code about whether an update contained the total number of bytes transferred, or the number of bytes transferred since the last update. One way this bug showed up was progress bars that seemed to stick at zero for a long time.

In order to fix it comprehensively, I add a new BytesProcessed data type, that is explicitly a total quantity of bytes, not a delta.

Note that this doesn't necessarily fix every problem with progress bars. Particularly, buffering can now cause progress bars to seem to run ahead of transfers, reaching 100% when data is still being uploaded.
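A sketch of that type and how delta-producing code could feed it; MeterUpdate and deltasToTotals are illustrative names:

    import Data.IORef

    -- A total quantity of bytes processed so far, never a delta.
    newtype BytesProcessed = BytesProcessed Integer
        deriving (Eq, Ord, Show)

    type MeterUpdate = BytesProcessed -> IO ()

    -- Adapt a source that reports per-update deltas into one that
    -- always hands the meter a running total.
    deltasToTotals :: MeterUpdate -> IO (Integer -> IO ())
    deltasToTotals meter = do
        ref <- newIORef 0
        return $ \delta -> do
            t <- atomicModifyIORef' ref (\t -> (t + delta, t + delta))
            meter (BytesProcessed t)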