Commit messages
|
This is better handled by checkPresent always failing.
|
Doesn't really seem worth making it work; addurl --fast can be used to
get a tree of files in the torrent and then the user can rm the ones they
don't want.
|
addurl behavior change: When downloading a url ending in .torrent,
it will download files from bittorrent, instead of the old behavior
of adding the torrent file to the repository.
Added Recommends on aria2 and bittornado | bittorrent.
This commit was sponsored by Asbjørn Sloth Tønnesen.
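For a rough illustration of the new dispatch (a hypothetical predicate, not
the actual git-annex code), the decision hinges on the url's suffix:

    import Data.List (isSuffixOf)

    -- Hypothetical check: urls ending in .torrent now go to the bittorrent
    -- downloader instead of being added as an ordinary file.
    useBittorrentDownloader :: String -> Bool
    useBittorrentDownloader u = ".torrent" `isSuffixOf` u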
|
Not, IMHO, good enough quality to be more than an example, but it does work!
|
The minimum allowed size actually refers to the part size!

This reverts commit 2ba5af49c94b97c586220c3553367988ef095934.
I misunderstood the cause of the problem.

When uploading the last part of a file, which was 640229 bytes, S3 rejected
that part: "Your proposed upload is smaller than the minimum allowed size"
I don't know what the minimum is, but the fix is just to include the last
part into the previous part. Since this can result in a part that's
double-sized, use half-sized parts normally.
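A minimal sketch of that workaround, treating parts as plain lazy
ByteStrings (illustrative only, not the code in Remote/S3.hs):

    import Data.Int (Int64)
    import qualified Data.ByteString.Lazy as L

    -- Cut normal parts at half the configured size and fold the final
    -- remainder into the previous part, so the last part sent is never a
    -- tiny one. (L.length forces the remainder here; real code would track
    -- sizes instead of forcing the stream.)
    splitParts :: Int64 -> L.ByteString -> [L.ByteString]
    splitParts partsize = go
      where
        half = partsize `div` 2
        go s
            | L.null s = []
            | L.length rest <= half = [s]   -- merge remainder into this part
            | otherwise = chunk : go rest
          where
            (chunk, rest) = L.splitAt half s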
I'm a little stuck on getting the list of etags of the parts.
This seems to require taking the md5 of each part locally,
which doesn't get along well with lazily streaming in the part from the
file. It would need to read the file twice, or lose laziness and buffer a
whole part -- but parts might be quite large.
This seems to be a problem with the API provided; S3 is supposed to return
an etag, but that is not exposed. I have filed a bug:
https://github.com/aristidb/aws/issues/141
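To make the tradeoff concrete, here is a sketch of the "read the file twice"
option (assuming cryptonite for hashing; this is not what the S3 remote
does): a first pass hashes each part so the etags are known, and a second
pass, not shown, would re-read the file to stream the parts to S3.

    import Data.Int (Int64)
    import qualified Data.ByteString.Lazy as L
    import Crypto.Hash (Digest, MD5, hashlazy)

    -- First pass over the file: compute the MD5 (etag) of every part.
    partMD5s :: Int64 -> FilePath -> IO [Digest MD5]
    partMD5s partsize f = map hashlazy . parts <$> L.readFile f
      where
        parts b
            | L.null b  = []
            | otherwise = let (h, t) = L.splitAt partsize b in h : parts t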
Conflicts:
Remote/S3.hs
|
Conflicts:
git-annex.cabal
|
Currently, initremote works, but not the other operations. They should be
fairly easy to add from this base.
Also, https://github.com/aristidb/aws/issues/119 blocks Internet Archive
support.
Note that since http-conduit is used, this also adds https support to S3,
although git-annex encrypts everything anyway, so that may not be extremely
useful. It is not enabled by default, because existing S3 special remotes
have port=80 in their config. Setting port=443 will enable it.
This commit was sponsored by Daniel Brockman.
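The port handling could look roughly like this (a hypothetical helper over
the remote's config map, not the actual S3 remote code):

    import qualified Data.Map as M

    data Scheme = HTTP | HTTPS
        deriving Show

    -- Existing remotes carry port=80 in their config, so plain http stays
    -- the default; only an explicit port=443 switches to https.
    schemeFromConfig :: M.Map String String -> Scheme
    schemeFromConfig c = case M.lookup "port" c of
        Just "443" -> HTTPS
        _          -> HTTP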
|
Reusing the http connection when operating on chunks is not done yet;
I had to submit some patches to DAV to support that. However, this is no
slower than old-style chunking was.
Note that it's a fileRetriever and a fileStorer, despite DAV using
bytestrings that would allow streaming. As a result, upload/download of
encrypted files is made a bit more expensive, since it spools them to temp
files. This was needed to get the progress meters to work.
There are probably ways to avoid that. But it turns out that the current
DAV interface buffers the whole file content in memory, and I have
sent in a patch to DAV to improve its interfaces. Using the new interfaces,
it's certainly going to need to be a fileStorer, in order to read the file
size from the file (getting the size of a bytestring would destroy
laziness). It should be possible to use the new interface to make it be a
byteRetriever, so I'll change that when I get to it.
This commit was sponsored by Andreas Olsson.
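To make the laziness point concrete (hypothetical helpers, not DAV or
git-annex code): asking the filesystem for the size leaves the content
unread, while taking the length of a lazy ByteString forces the whole
stream.

    import System.IO (IOMode(ReadMode), hFileSize, withFile)
    import qualified Data.ByteString.Lazy as L

    -- Cheap: the size comes from file metadata; nothing is read.
    sizeFromFile :: FilePath -> IO Integer
    sizeFromFile f = withFile f ReadMode hFileSize

    -- Expensive: L.length walks the whole stream, so a storer handed only
    -- a ByteString would have to force the entire content to learn its size.
    sizeFromBytes :: L.ByteString -> Integer
    sizeFromBytes = fromIntegral . L.length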
|
Some reorg of Remote.Rsync code to export the things gcrypt needs.
|
Chunking does not speed up rsync at all, so it's only useful for
interop with the directory special remote.
|
The assistant defaults to 1MiB chunk size for new S3 special remotes,
which will work around a couple of bugs:
http://git-annex.branchable.com/bugs/S3_memory_leaks/
http://git-annex.branchable.com/bugs/S3_upload_not_using_multipart/
|
Suppose A is 10 MB and B is 20 MB, and the upload speed is the same. If B
starts first, then A will overwrite the file it is uploading for the 1st
chunk. Then A uploads the second chunk, and once A is done, B finishes the
1st chunk and uploads its second. We now have chunk 1 (from A) and chunk 2
(from B), so data loss.
|
Avoid unnecessary passphrase prompts.
This is a security/usability tradeoff. To avoid exposing the gpg key ids
that can decrypt the repository, users can unset
gcrypt-publish-participants.
The gcrypt-publish-participants option is available in my fork of
git-remote-gcrypt.
This commit was sponsored by Christopher Kernahan.