| Commit message | Author | Age |
The --file parameter specifies the subdir in this mode.
This commit was sponsored by an anonymous bitcoiner.
This reverts commit bc0bf97b20c48e1d1a35d25e2e76a311c102438c.
Putting the filename in the claim was a bad idea.
external special remote that handles magnet: and *.torrent urls.
getting the urls associated with a key.
could be taken to read that's the only time git-annex runs gpg, which is not the case.
This threw an unusual exception without an error message when probing to see if
the bucket exists yet. So rather than relying on tryS3, catch all
exceptions.
This does mean that it might get an exception for some transient network
error, interpret that as the bucket not existing yet, try to create it, and
then fail because it already exists.
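A minimal sketch of that probe-then-create flow, using plain Control.Exception
rather than git-annex's own helpers; probeBucket and createBucket here are
hypothetical stand-ins for the real S3 calls:

    import Control.Exception (SomeException, try)

    -- Probe for the bucket; treat any exception as "bucket does not exist
    -- yet" and try to create it. As noted above, this can misfire on
    -- transient network errors.
    ensureBucket :: IO () -> IO () -> IO ()
    ensureBucket probeBucket createBucket = do
        r <- try probeBucket :: IO (Either SomeException ())
        case r of
            Right () -> return ()
            Left _   -> createBucket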
This reverts commit 2ba5af49c94b97c586220c3553367988ef095934.
I misunderstood the cause of the problem.
When uploading the last part of a file, which was 640229 bytes, S3 rejected
that part: "Your proposed upload is smaller than the minimum allowed size".
I don't know what the minimum is, but the fix is just to merge the last
part into the previous part. Since this can result in a part that's
double-sized, use half-sized parts normally.
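A minimal sketch of that splitting rule: upload in half-sized parts, and let
the final part absorb whatever is left over, so it can grow to at most double
the normal part size. The function and its types are illustrative, not
git-annex's actual code:

    -- Offsets and lengths of the parts to upload for a file of the given
    -- size, using half of the configured partsize per part and letting
    -- the last part absorb the remainder.
    partBoundaries :: Integer -> Integer -> [(Integer, Integer)]
    partBoundaries partsize filesize = go 0
      where
        chunk = partsize `div` 2
        go offset
            | offset >= filesize = []
            | remaining <= 2 * chunk = [(offset, remaining)]
            | otherwise = (offset, chunk) : go (offset + chunk)
          where
            remaining = filesize - offset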
Unfortunately, I don't fully understand why the old method, using a lazy
bytestring, was leaking. I just know that it was leaking, despite
neither hGetUntilMetered nor byteStringPopper seeming to leak by
itself.
The new method avoids the lazy bytestring, and simply reads chunks from the
handle and streams them out to the http socket.
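A sketch of that chunked streaming in terms of http-client's
RequestBodyStream: the popper reads one strict chunk from the handle each
time it is called, so only a single chunk is in memory at once. The 32k
chunk size is an arbitrary choice here, and progress metering is omitted:

    import qualified Data.ByteString as S
    import Network.HTTP.Client (RequestBody(RequestBodyStream))
    import System.IO (Handle)

    handleRequestBody :: Integer -> Handle -> RequestBody
    handleRequestBody len h =
        RequestBodyStream (fromIntegral len) $ \needsPopper ->
            -- S.hGet returns an empty ByteString at EOF, which is how a
            -- popper signals the end of the body.
            needsPopper (S.hGet h 32768)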
Still seems to buffer the whole partsize in memory, but I'm pretty sure my
code is not what's doing it. See https://github.com/aristidb/aws/issues/142
May not work; if it does, this is going to be the simplest way to get good
memory size and progress reporting.
I assume 0.10.6 will have the fix for the bug I reported, which has already
been fixed in master.
Untested and not even compiled yet.
Testing should include checks that file content streams through without
buffering in memory.
Note that CL.consume causes all the etags to be buffered in memory.
This is probably nearly unavoidable, since a request has to be constructed
that contains the list of etags in its body. (While it might be possible to
stream generation of the body, that would entail making an http request that
dribbles out parts of the body as the multipart uploads complete, which is
not likely to work well.)
To limit the impact of this, it's best for partsize to be set to some
suitably large value, like 1 GB. Then a full terabyte file will need only
1024 etags to be stored, which will probably use around 1 MB of memory.
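A minimal sketch of the shape described here: stream the parts through a
conduit that uploads each one and yields its etag, then CL.consume the
(small) etag list to build the final completion request. uploadPart and
completeUpload are hypothetical stand-ins for the real S3 calls:

    import Data.ByteString (ByteString)
    import Data.Conduit (ConduitT, runConduit, (.|))
    import qualified Data.Conduit.List as CL
    import Data.Text (Text)

    finishMultipart
        :: Monad m
        => ConduitT () ByteString m ()     -- stream of part contents
        -> (ByteString -> m Text)          -- uploadPart, returning the part's etag
        -> ([(Integer, Text)] -> m ())     -- completeUpload, needing every etag
        -> m ()
    finishMultipart source uploadPart completeUpload = do
        -- CL.consume is what holds all the etags in memory at once.
        etags <- runConduit $ source .| CL.mapM uploadPart .| CL.consume
        completeUpload (zip [1 ..] etags)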
I'm a little stuck on getting the list of etags of the parts.
This seems to require taking the md5 of each part locally,
which doesn't get along well with lazily streaming in the part from the
file. It would need to read the file twice, or lose laziness and buffer a
whole part -- but parts might be quite large.
This seems to be a problem with the API provided; S3 is supposed to return
an etag, but that is not exposed. I have filed a bug:
https://github.com/aristidb/aws/issues/141
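For context on why computing it locally is possible at all: the etag S3
returns for an uploaded part is normally just the hex MD5 of that part's
bytes. A sketch using cryptonite's Crypto.Hash (partEtag is an illustrative
name, and this still costs hashing plus either buffering the part or a
second read, as noted above):

    import qualified Data.ByteString as S
    import Crypto.Hash (Digest, hash)
    import Crypto.Hash.Algorithms (MD5)

    -- The hex etag of a part; Digest's Show instance renders hex.
    partEtag :: S.ByteString -> String
    partEtag part = show (hash part :: Digest MD5)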
Kept support for older aws, since Debian has 0.9.2 still.
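The usual way to keep both versions building is a Cabal-generated CPP
version macro; a sketch of that idiom (the gated value here is purely
illustrative, not the actual difference between the two aws releases):

    {-# LANGUAGE CPP #-}

    -- MIN_VERSION_aws is generated by Cabal for packages depending on aws.
    newerAws :: Bool
    #if MIN_VERSION_aws(0,10,0)
    newerAws = True
    #else
    newerAws = False
    #endif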
Already done on s3-aws branch, so reduce divergence.
The aws library supports the AWS4-HMAC-SHA256 signing algorithm that it requires.
But commented out for now, because:
The authorization mechanism you have provided is not supported. Please use
AWS4-HMAC-SHA256
Conflicts:
Remote/S3.hs
That and S3 are all that use creds currently, except that external
remotes can also use creds. I have not handled showing info about external
remote creds, because they can have 0, 1, or more separate cred pairs, and
there's no way for info to enumerate them or know how they're used.
So it seems ok to leave creds info out for external remotes.
This is intended to let the user easily tell if a remote's creds are
coming from info embedded in the repository, or instead from the
environment, or perhaps are locally stored in a creds file.
This commit was sponsored by Frédéric Schütz.
Now `git annex info $remote` shows info specific to the type of the remote;
for example, it shows the rsync url.
Remote types that support encryption or chunking also include that in their
info.
This commit was sponsored by Ævar Arnfjörð Bjarmason.