I tend to prefer moving toward explicit exception handling, not away from
it, but in this case, I think there are good reasons to let checkPresent
throw exceptions:
1. They can all be caught in one place (Remote.hasKey), and we know
every possible exception is caught there now, which we didn't before
(see the sketch after this list).
2. It simplified the code of the Remotes. I think it makes sense for
Remotes to be able to be implemented without needing to worry about
catching exceptions inside them. (Mostly.)
3. Types.StoreRetrieve.Preparer can only work on things that return a
Bool, which all the other relevant remote methods already did.
I do not see a good way to generalize that type; my previous attempts
failed miserably.
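
For illustration, a minimal sketch of point 1, with simplified types (the
real code runs in the Annex monad and takes a Remote, but the shape is the
same):

    import Control.Exception (SomeException, try)

    -- checkPresent is free to throw; hasKey is the one place that turns
    -- any exception into an Either, so callers see every failure.
    hasKey :: (key -> IO Bool) -> key -> IO (Either String Bool)
    hasKey checkPresent k = do
        r <- try (checkPresent k) :: IO (Either SomeException Bool)
        return (either (Left . show) Right r)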
Reuse Remote.Directory's code.
Now git-annex-shell recvkey, when the key is already present, allows
another copy to be rsynced up, and just throws it away.
This same behavior could already happen before, when eg, two repos
tried to upload the same object at the same time. So this makes the test
suite pass, and should not add any bad behavior, other than slightly more
work being done in a rather rare edge case.
This relies on moveAnnex's behavior of keeping the current version of an
object.
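
A minimal sketch of the moveAnnex semantics this relies on (a hypothetical
standalone helper, not git-annex's actual code):

    import System.Directory (doesFileExist, removeFile, renameFile)

    -- Move a freshly received temp file into place as the annexed object,
    -- keeping the current version when one is already present; the
    -- redundant upload is simply thrown away.
    moveAnnex :: FilePath -> FilePath -> IO ()
    moveAnnex tmpfile objfile = do
        present <- doesFileExist objfile
        if present
            then removeFile tmpfile
            else renameFile tmpfile objfile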
Generalized code from Remote.Directory and reused it.
Test suite now passes for local gcrypt repos.
This involved making Remote.Gcrypt.gen expect a Repo with a regular,
non-gcrypt path. Since that is what's stored as the Remote's gitrepo,
testremote can then modify it and feed it back into gen.
Same as is done by rsync, and for regular git repos.
When files are stored using rsync, they have their write bit removed;
so does the directory they're put in. The local repo code did not turn
these bits back on, so removal failed.
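
The shape of the fix, sketched with the unix package (the helper name is an
assumption, not necessarily the one used in git-annex):

    import System.Posix.Files
        (fileMode, getFileStatus, ownerWriteMode, setFileMode, unionFileModes)

    -- Turn the owner write bit back on, so a file that rsync made
    -- read-only (or the read-only directory containing it) can be removed.
    allowWrite :: FilePath -> IO ()
    allowWrite path = do
        status <- getFileStatus path
        setFileMode path (fileMode status `unionFileModes` ownerWriteMode)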
The leak was caused by the thread that ssh'd to send transferinfo
not waiting on its ssh. Doh.
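
The shape of the fix, sketched with System.Process (details like the remote
command and what gets written are simplified):

    import System.IO (hClose, hPutStrLn)
    import System.Process

    -- Spawn ssh to send transferinfo, then reap it; omitting the
    -- waitForProcess is what leaked one zombie per transfer.
    sendTransferInfo :: String -> String -> IO ()
    sendTransferInfo host info = do
        (Just hin, _, _, pid) <- createProcess
            (proc "ssh" [host, "git-annex-shell transferinfo"])
                { std_in = CreatePipe }
        hPutStrLn hin info
        hClose hin
        _ <- waitForProcess pid
        return ()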
Some reorg of Remote.Rsync code to export the things gcrypt needs.
This breaks gcrypt, which relies on some internals of the rsync remote.
To fix next...
This reaping of any child processes came to cause me problems when redoing
the rsync special remote -- a gpg process that was still running got waited
on, and the place that then checked its exit code failed.
I cannot reproduce any zombies when using the rsync special remote.
But I still can when using a normal git remote, accessed over ssh.
There is 1 zombie per file downloaded without this horrible hack enabled.
So, move the hack to only be used in that case.
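
For reference, a sketch of what such a reap-everything hack looks like, and
why it is dangerous (a stand-alone version, not the exact code):

    import Control.Exception (SomeException, try)
    import System.Posix.Process (ProcessStatus, getAnyProcessStatus)
    import System.Posix.Types (ProcessID)

    -- Collect the status of any child that has exited, without blocking.
    -- This clears zombies left behind by ssh, but it can also steal the
    -- exit status of a child (eg, gpg) that another thread still intends
    -- to wait on -- exactly the problem described above.
    reapZombies :: IO ()
    reapZombies = do
        r <- try (getAnyProcessStatus False True)
            :: IO (Either SomeException (Maybe (ProcessID, ProcessStatus)))
        case r of
            Right (Just _) -> reapZombies
            _ -> return ()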
specialRemote handles all meter display, so this is redundant.
Allow disabling progress displays, for eg, rsync.
Chunking does not speed up rsync at all, so it's only useful for
interop with the directory special remote.
Make the byteRetriever be passed the callback that consumes the bytestring.
This way, there's no worry about the lazy bytestring not being fully read
when the resource that's creating it is closed.
Which in turn lets bup, ddar, and S3 each switch from using an unnecessary
fileRetriever to a byteRetriever. So, more efficient on chunks and encrypted
files.
The only remaining fileRetrievers are hook and external, which really do
retrieve to files.
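
A sketch of the callback shape, with simplified types (the real Retriever
runs in the Annex monad and also handles progress updates):

    import qualified Data.ByteString.Lazy as L
    import System.IO (IOMode(ReadMode), withFile)

    -- The consumer runs while the producing resource is still open; only
    -- after the callback returns is the handle closed, so the lazy
    -- ByteString can always be fully read.
    byteRetriever :: FilePath -> (L.ByteString -> IO Bool) -> IO Bool
    byteRetriever src callback = withFile src ReadMode $ \h ->
        L.hGetContents h >>= callback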
Since ddar de-duplicates, I assume there is no benefit from chunking.
This has not been tested!
bup already splits files and does rolling deltas, so there is no reason to
use chunking here.
The new API made it easier to add progress support for storeKey, so that's
done. Unfortunately, bup-split still outputs its own progress even with -q,
so it's a little ugly, but not too bad.
Made dropping remove the branch for an object (see the sketch below), for
two reasons:
1. The new API calls removeKey to roll back a storeKey when the content
changed unexpectedly.
2. So that testremote will be happy.
Also, fixed a bug that caused a crash when removing the branch for an
object in rollback.
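
A sketch of the branch removal (hypothetical helper; it assumes bup's
convention of storing each key under a branch named after it):

    import System.Process (callProcess)

    -- Delete the branch bup created for this object, so the key reads as
    -- removed and a rolled-back storeKey can be redone cleanly.
    removeBupBranch :: FilePath -> String -> IO ()
    removeBupBranch buprepo branch =
        callProcess "git" ["--git-dir=" ++ buprepo, "branch", "-q", "-D", branch]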
Chunking would complicate the assistant's code that checks when a pending
retrieval of a key from glacier is done. It would perhaps be nice to
support it to allow resuming, but not right now.
Converting to the new API still simplifies the code.
The assistant defaults to 1MiB chunk size for new S3 special remotes,
which will work around a couple of bugs:
http://git-annex.branchable.com/bugs/S3_memory_leaks/
http://git-annex.branchable.com/bugs/S3_upload_not_using_multipart/
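
Roughly, this just means the assistant includes a chunk parameter in the
remote config it sets up (a sketch; the field list is abbreviated and the
names are assumptions):

    -- Config the assistant passes when creating a new S3 special remote;
    -- the chunk setting is the new part.
    s3Defaults :: [(String, String)]
    s3Defaults =
        [ ("type", "S3")
        , ("chunk", "1MiB") -- works around the memory leak / multipart bugs
        ]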
The forall a. in Preparer made resourcePrepare seem unusable, so
I specialized a to Bool. Which works for both Preparer Storer and
Preparer Retriever, but wouldn't let the Preparer be used for hasKey
as it currently stands.
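
In sketch form, with stand-in types (Annex simplified to IO, Key to String;
the real definitions live in Types.StoreRetrieve):

    type Annex = IO   -- stand-in for git-annex's Annex monad
    type Key = String -- stand-in for the real Key type

    -- With forall a., callers could not pin down the result type:
    --   type Preparer helper =
    --       forall a. Key -> (Maybe helper -> Annex a) -> Annex a
    -- Specialized to Bool, it fits both Preparer Storer and Preparer
    -- Retriever, whose actions already return Bool -- but not a hasKey
    -- that wants to return something richer.
    type Preparer helper = Key -> (Maybe helper -> Annex Bool) -> Annex Bool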