Implemented the Retriever.
Unfortunately, it is a fileRetriever and not a byteRetriever.
It should be possible to convert this to a byteRetriever, but I got stuck:
The conduit sink needs to process individual chunks, but a byteRetriever
needs to pass a single L.ByteString to its callback for processing. I
looked into using unsafeInterleaveIO to build up the bytestring lazily,
but the sink is already operating under conduit's inversion of control,
and does not run directly in IO anyway.
On the plus side, no more memory leak..
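
A minimal sketch of that tension, with illustrative types (FileRetriever,
ByteRetriever, and the mk* helpers here are simplified stand-ins, not
git-annex's real definitions):

    import qualified Data.ByteString as S
    import qualified Data.ByteString.Lazy as L
    import Data.Conduit
    import qualified Data.Conduit.Binary as CB
    import Control.Monad.Trans.Resource (ResourceT, runResourceT)

    type FileRetriever = FilePath -> IO ()
    type ByteRetriever = (L.ByteString -> IO ()) -> IO ()

    -- Sinking to a file fits conduit's inversion of control naturally.
    mkFileRetriever :: ConduitT () S.ByteString (ResourceT IO) () -> FileRetriever
    mkFileRetriever src dest = runResourceT $ runConduit $ src .| CB.sinkFile dest

    -- A byteRetriever needs the whole lazy ByteString up front; sinkLbs
    -- builds one, but only by accumulating every chunk in memory first.
    mkByteRetriever :: ConduitT () S.ByteString (ResourceT IO) () -> ByteRetriever
    mkByteRetriever src callback =
        callback =<< runResourceT (runConduit (src .| CB.sinkLbs))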
Fixes the memory leak on store.. the second oldest open git-annex bug!
Only retrieve remains to be converted.
This commit was sponsored by Scott Robinson.
The response headers can sometimes include the entire input request,
which is several pages of garbage.
guardUsable r (error "foo") *returned* an error, rather than throwing it
Currently, initremote works, but not the other operations. They should be
fairly easy to add from this base.
Also, https://github.com/aristidb/aws/issues/119 blocks internet archive
support.
Note that since http-conduit is used, this also adds https support to S3.
Since git-annex encrypts everything anyway, that may not be extremely
useful. It is not enabled by default, because existing S3 special remotes
have port=80 in their config. Setting port=443 will enable it.
This commit was sponsored by Daniel Brockman.
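
For example, switching an existing S3 special remote over to https (a
sketch; "mys3" stands in for the remote's actual name):

    git annex enableremote mys3 port=443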
httpBodyRetriever will later also be used by S3.
This commit was sponsored by Ethan Aubin.
Removed old extensible-exceptions, only needed for very old ghc.
Made webdav use Utility.Exception, to work after some changes in DAV's
exception handling.
Removed Annex.Exception. Mostly this was trivial, but note that
tryAnnex is replaced with tryNonAsync and catchAnnex replaced with
catchNonAsync. In theory that could be a behavior change, since the former
caught all exceptions, and the latter don't catch async exceptions.
However, in practice, nothing in the Annex monad uses async exceptions.
Grepping for throwTo and killThread only finds stuff in the assistant,
which does not seem related.
Command.Add.undo is changed to accept a SomeException, and things
that use it for rollback now catch non-async exceptions, rather than
only IOExceptions.
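
A rough sketch of the distinction, modeled on Utility.Exception's behavior
rather than copied from it: the NonAsync variants rethrow asynchronous
exceptions instead of swallowing them.

    {-# LANGUAGE ScopedTypeVariables #-}
    import Control.Exception

    -- Catch everything except async exceptions, which are rethrown.
    catchNonAsync :: IO a -> (SomeException -> IO a) -> IO a
    catchNonAsync a onerr = a `catches`
        [ Handler (\(e :: AsyncException) -> throwIO e)
        , Handler (\(e :: SomeException) -> onerr e)
        ]

    tryNonAsync :: IO a -> IO (Either SomeException a)
    tryNonAsync a = (Right <$> a) `catchNonAsync` (return . Left)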
The httpStorer will later also be used by S3.
This commit was sponsored by Torbjørn Thorsen.
For both new and legacy chunks.
Massive speed up!
This commit was sponsored by Dominik Wagenknecht.
This speeds up the webdav special remote somewhat, since it often now
groups actions together in a single http connection when eg, storing a
file.
Legacy chunks are still supported, but have not been sped up.
This depends on an as-yet-unreleased version of DAV.
This commit was sponsored by Thomas Hochstein.
support
Reusing http connection when operating on chunks is not done yet,
I had to submit some patches to DAV to support that. However, this is no
slower than old-style chunking was.
Note that it's a fileRetriever and a fileStorer, despite DAV using
bytestrings that would allow streaming. As a result, upload/download of
encrypted files is made a bit more expensive, since it spools them to temp
files. This was needed to get the progress meters to work.
There are probably ways to avoid that.. But it turns out that the current
DAV interface buffers the whole file content in memory, and I have
sent in a patch to DAV to improve its interfaces. Using the new interfaces,
it's certainly going to need to be a fileStorer, in order to read the file
size from the file (getting the size of a bytestring would destroy
laziness). It should be possible to use the new interface to make it be a
byteRetriever, so I'll change that when I get to it.
This commit was sponsored by Andreas Olsson.
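
A simplified sketch of the two storer shapes involved (the real types in
Types.StoreRetrieve also carry Key and MeterUpdate and run in Annex):

    import qualified Data.ByteString.Lazy as L
    import System.IO (IOMode(ReadMode), hFileSize, withFile)

    type MeterUpdate = Integer -> IO ()  -- total bytes processed so far

    -- A fileStorer is handed a spooled temp file, so the total size is
    -- cheap to get up front, which progress meters need.
    type FileStorer = FilePath -> MeterUpdate -> IO Bool

    fileSize :: FilePath -> IO Integer
    fileSize f = withFile f ReadMode hFileSize

    -- A byteStorer streams a lazy ByteString; the only way to learn its
    -- size is L.length, which forces it all and destroys laziness.
    type ByteStorer = L.ByteString -> MeterUpdate -> IO Bool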
This will allow special remotes to eg, open a http connection and reuse it,
while checking if chunks are present, or removing chunks.
S3 and WebDAV both need this to support chunks with reasonable speed.
Note that a special remote might want to cache a http connection across
multiple requests. A simple case of this is that CheckPresent is typically
called before Store or Remove. A remote using this interface can certainly
use a Preparer that eg, uses an MVar to cache a http connection.
However, it's up to the remote to then deal with things like stale or
stalled http connections when eg, doing a series of downloads from a remote
and other places. There could be long delays between calls to a remote,
which could lead to eg, http connection stalls; the machine might even
move to a new network, etc.
It might be nice to improve this interface later to allow
the simple case without needing to handle the full complex case.
One way to do it would be to have a `Transaction SpecialRemote cache`,
where SpecialRemote contains methods for Storer, Retriever, Remover, and
CheckPresent, that all expect to be passed a `cache`.
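
A minimal sketch of that MVar idea (Conn and withCachedConn are
illustrative names, not part of git-annex): open the connection on first
use, then hand the cached one to each later action.

    import Control.Concurrent.MVar

    data Conn = Conn  -- stands in for a real http connection handle

    -- Run an action with a connection, opening one only on first use.
    -- modifyMVar puts the old value back if the action throws.
    withCachedConn :: MVar (Maybe Conn) -> (Conn -> IO a) -> IO a
    withCachedConn cache action = modifyMVar cache $ \mc -> do
        conn <- maybe openConn return mc
        r <- action conn
        return (Just conn, r)
      where
        openConn = return Conn  -- placeholder: establish the connection

    -- Typical sequence: checkPresent, then store, sharing one connection.
    demo :: IO ()
    demo = do
        cache <- newMVar Nothing
        _present <- withCachedConn cache $ \_conn -> return False
        _stored  <- withCachedConn cache $ \_conn -> return True
        return ()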
I tend to prefer moving toward explicit exception handling, not away from
it, but in this case, I think there are good reasons to let checkPresent
throw exceptions:
1. They can all be caught in one place (Remote.hasKey), and we now know
every possible exception is caught there, which we didn't before. (See
the sketch after this list.)
2. It simplified the code of the Remotes. I think it makes sense for
Remotes to be able to be implemented without needing to worry about
catching exceptions inside them. (Mostly.)
3. Types.StoreRetrieve.Preparer can only work on things that return a
Bool, which all the other relevant remote methods already did.
I do not see a good way to generalize that type; my previous attempts
failed miserably.
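
A sketch of point 1, simplified to IO and a String key, and using plain
try where git-annex would avoid catching async exceptions:

    import Control.Exception

    -- A remote's checker; free to throw instead of returning a Bool.
    checkPresent :: String -> IO Bool
    checkPresent _key = throwIO (userError "remote unreachable")

    -- The single choke point: every exception becomes a Left.
    hasKey :: String -> IO (Either String Bool)
    hasKey key = either (Left . show) Right
        <$> (try (checkPresent key) :: IO (Either SomeException Bool))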
Reuse Remote.Directory's code.
Generalized code from Remote.Directory and reused it.
Test suite now passes for local gcrypt repos.
This involved making Remote.Gcrypt.gen expect a Repo with a regular,
non-gcrypt path. Since that is what's stored as the Remote's gitrepo,
testremote can then modify it and feed it back into gen.
Same as is done by rsync, and for regular git repos.
When files are stored using rsync, they have their write bit removed;
so does the directory they're put in. The local repo code did not turn
these bits back on, so it failed to remove them.
The leak was caused by the thread that sshd'd to send transferinfo
not waiting on its ssh. Doh.
Some reorg of Remote.Rsync code to export the things gcrypt needs.
This breaks gcrypt, which relies on some internals of the rsync remote.
To fix next..
This reaping of any processes came to cause problems when redoing the
rsync special remote -- a running gpg process got waited on, and the
code that then checked its return code failed.
I cannot reproduce any zombies when using the rsync special remote.
But I still can when using a normal git remote, accessed over ssh.
There is 1 zombie per file downloaded without this horrible hack enabled.
So, move the hack to only be used in that case.
specialRemote handles all meter display, so this is redundant.
Allow disabling progress displays, for eg, rsync.
Make the byteRetriever be passed the callback that consumes the bytestring.
This way, there are no worries about the lazy bytestring not all being read
when the resource that's creating it is closed.
Which in turn lets bup, ddar, and S3 each switch from using an unnecessary
fileRetriever to a byteRetriever. So, more efficient on chunks and encrypted
files.
The only remaining fileRetrievers are hook and external, which really do
retrieve to files.
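
In sketch form, with illustrative types (the real Retriever also carries
Key and MeterUpdate): the callback runs while the source is still open, so
the lazy ByteString is fully consumed before cleanup.

    import qualified Data.ByteString.Lazy as L
    import System.IO

    -- The callback that consumes the bytestring is passed in...
    type ByteRetriever = (L.ByteString -> IO Bool) -> IO Bool

    -- ...so the retriever controls the resource's lifetime around it.
    handleRetriever :: IO Handle -> ByteRetriever
    handleRetriever open callback = do
        h <- open
        b <- L.hGetContents h
        r <- callback b  -- consumer finishes before the handle is closed
        hClose h
        return r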
Since ddar de-duplicates, I assume there is no benefit from chunking.
This has not been tested!
bup already splits files and does rolling deltas, so there is no reason to
use chunking here.
The new API made it easier to add progress support for storeKey, so that's
done. Unfortunately, bup-split still outputs its own progress even with -q,
so it's a little ugly, but not too bad.
Made dropping remove the branch for an object, for two reasons:
1. The new API calls removeKey to roll back a storeKey when the content
changed unexpectedly.
2. So that testremote will be happy.
Also, fixed a bug that caused a crash when removing the branch for an
object in rollback.
Chunking would complicate the assistant's code that checks when a pending
retrieval of a key from glacier is done. It would perhaps be nice to
support it to allow resuming, but not right now.
Converting to the new API still simplifies the code.
The assistant defaults to 1MiB chunk size for new S3 special remotes.
Which will work around a couple of bugs:
http://git-annex.branchable.com/bugs/S3_memory_leaks/
http://git-annex.branchable.com/bugs/S3_upload_not_using_multipart/
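
A manually created remote can opt into the same workaround via the chunk
parameter (a sketch; the remote name and other parameters are
illustrative):

    git annex initremote mys3 type=S3 encryption=none chunk=1MiB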
The forall a. in Preparer made resourcePrepare seem unusable, so
I specialized a to Bool. Which works for both Preparer Storer and
Preparer Retriever, but wouldn't let the Preparer be used for hasKey
as it currently stands.
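
In simplified form (IO standing in for Annex, and Key dropped), the change
looks roughly like this:

    {-# LANGUAGE RankNTypes #-}

    -- Before: the rank-2 forall demands a preparer usable at every
    -- result type a, which resource-style helpers could not satisfy.
    type PreparerPoly helper = forall a. (Maybe helper -> IO a) -> IO a

    -- After: specializing a to Bool fits both Storer and Retriever
    -- (each returns Bool), but rules out reusing it for hasKey.
    type Preparer helper = (Maybe helper -> IO Bool) -> IO Bool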
And fixed a bug found by these tests; retrieveKeyFile would fail
when the dest file was already complete.
This commit was sponsored by Bradley Unterrheiner.
Discovered thanks to testremote command!
Found by testremote
0.6.1 is in Debian testing, and stable does not have DAV at all, so I can
dispense with this compatibility code.
The content of unstable keys can potentially be different in different
repos, so eg, resuming a chunked upload started by another repo would
corrupt data.
This way, when the remote implementation neglects to update progress,
there will still be a somewhat useful progress display, as long as chunks
are used.
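
Roughly how such a fallback can work (an illustrative sketch, not
git-annex's actual chunking driver): the driver bumps the meter after each
chunk, so even a storer that ignores progress yields chunk-granularity
updates.

    type MeterUpdate = Integer -> IO ()  -- total bytes processed so far

    -- Store chunks in order, advancing the meter by one chunk size each
    -- time, whether or not the storer reports progress itself.
    storeChunks :: Integer -> [chunk] -> (chunk -> IO Bool) -> MeterUpdate -> IO Bool
    storeChunks chunksize cs storer meter = go 0 cs
      where
        go _ [] = return True
        go sofar (c:rest) = do
            ok <- storer c
            if ok
                then do
                    let sofar' = sofar + chunksize
                    meter sofar'  -- coarse, chunk-granularity update
                    go sofar' rest
                else return False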