| Commit message | Author | Age |
| |
Avoid using fileSize, which maxes out at just 2 GB on Windows.
Instead, use hFileSize, which doesn't have a bounded size.
Fixes support for files > 2 GB on Windows.
Note that the InodeCache code only needs to compare file sizes,
so it doesn't matter if the file size wraps. It has been left as-is,
both to avoid invalidating existing inode caches, and because that code
passes FileStatus around and would have become more expensive if it
called getFileSize.
This commit was sponsored by Christian Dietrich.
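A minimal sketch of the hFileSize approach, using an illustrative helper
rather than git-annex's actual getFileSize:

    import System.IO (IOMode(ReadMode), hFileSize, withFile)

    -- hFileSize works on a Handle and returns an unbounded Integer,
    -- so sizes over 2 GB are not truncated on Windows.
    getFileSize' :: FilePath -> IO Integer
    getFileSize' f = withFile f ReadMode hFileSize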
|
| |
* info: Can now display info about a given uuid.
* Added to remote/uuid info: a count of the keys present on the remote,
and their total size. This is rather expensive to calculate,
so it comes last, and --fast disables it.
* Git remote info now includes the date of the last sync with the remote.
|
| |
Reverts 2bba5bc22d049272d3328bfa6c452d3e2e50e86c
Unfortunately, this caused breakage on Windows, and possibly elsewhere,
because parentDir and takeDirectory do not behave the same when there is a
trailing directory separator.
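For reference, a small sketch of the trailing-separator difference, shown
with takeDirectory's documented behavior (parentDir is git-annex's own
helper and is not exercised here):

    import System.FilePath (takeDirectory)

    -- takeDirectory keeps the last component when the path ends in a
    -- separator, which is where its behavior diverges from parentDir:
    --   takeDirectory "foo/bar"  == "foo"
    --   takeDirectory "foo/bar/" == "foo/bar"
    main :: IO ()
    main = mapM_ (print . takeDirectory) ["foo/bar", "foo/bar/"]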
|
| |
parentDir is less safe than takeDirectory, especially when working
with relative FilePaths. It's really only useful in loops that
want to terminate at /.
This commit was sponsored by Audric SCHILTKNECHT.
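A minimal sketch (not git-annex's code) of the kind of loop where such a
helper is wanted, walking a path upward and stopping at a fixed point
such as /:

    import System.FilePath (takeDirectory)

    -- takeDirectory "/" == "/" (and "." stays "."), so the fixed-point
    -- check terminates the walk at the top.
    upwards :: FilePath -> [FilePath]
    upwards p
        | parent == p = [p]
        | otherwise   = p : upwards parent
      where
        parent = takeDirectory p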
|
| |
Fixes:
/home/joey/tmp/xxx/.git/annex/misctmp/torrent18347: openFile: resource busy (file is locked)
|
| |
This is better handled by checkPresent always failing.
|
| |
See comment.
|
| |
some torrent files cannot be downloaded
|
| |
addurl behavior change: When downloading a url ending in .torrent,
it will download files from bittorrent, instead of the old behavior
of adding the torrent file to the repository.
Added Recommends on aria2 and bittornado | bittorrent.
This commit was sponsored by Asbjørn Sloth Tønnesen.
|
| |
The --file parameter specifies the subdir in this mode.
|
| |
This commit was sponsored by an anonymous bitcoiner.
|
| |
This reverts commit bc0bf97b20c48e1d1a35d25e2e76a311c102438c.
Putting the filename in the claim was a bad idea.
|
| |
external special remote that handles magnet: and *.torrent urls.
|
| |
getting the urls associated with a key.
|
|\ |
|
| |
| |
| |
| | |
could be taken to read that's the only time git-annex runs gpg, which is not the case.
| |
| | |
This threw an unusual exception without an error message when probing to
see if the bucket exists yet. So rather than relying on tryS3, catch all
exceptions.
This does mean that a transient network error can be mistaken for the
bucket not existing yet, leading to an attempt to create it that then
fails because it already exists.
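A minimal sketch of the catch-all probe described above; the probe action
and helper name are placeholders, not the actual S3 remote code:

    import Control.Exception (SomeException, try)

    -- Any exception from the probe is treated as "bucket not there yet",
    -- accepting the trade-off noted above: a transient network error is
    -- misread the same way and leads to a doomed create attempt.
    bucketSeemsToExist :: IO () -> IO Bool
    bucketSeemsToExist probe = do
        r <- try probe :: IO (Either SomeException ())
        return $ either (const False) (const True) r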
| |
| | |
This reverts commit 2ba5af49c94b97c586220c3553367988ef095934.
I misunderstood the cause of the problem.
| |
| | |
When uploading the last part of a file, which was 640229 bytes, S3 rejected
that part: "Your proposed upload is smaller than the minimum allowed size".
I don't know what the minimum is, but the fix is just to merge the last
part into the previous part. Since this can result in a part that's
double-sized, use half-sized parts normally.
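A sketch of the partitioning idea under that constraint, written as a
standalone helper rather than the actual S3 remote code:

    -- Split a total size into chunks of half the configured part size,
    -- folding whatever remains at the end into the final chunk, so the
    -- last uploaded part is never a tiny fragment below the minimum; it
    -- can instead be up to a full part size.
    partSizes :: Integer -> Integer -> [Integer]
    partSizes partsize total = go total
      where
        half = max 1 (partsize `div` 2)
        go n
            | n <= 0        = []
            | n <= partsize = [n]
            | otherwise     = half : go (n - half)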
| |
| | |
Unfortunately, I don't fully understand why the old method, using a lazy
bytestring, was leaking memory. I just know that it was leaking, despite
neither hGetUntilMetered nor byteStringPopper seeming to leak by
themselves.
The new method avoids the lazy bytestring, and simply reads chunks from the
handle and streams them out to the http socket.
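A minimal sketch of the chunk-at-a-time streaming described here; the
sender callback and chunk size are placeholders for the real http
plumbing:

    import qualified Data.ByteString as B
    import System.IO (Handle)

    -- Read strict chunks from the handle and hand each one to a sender as
    -- it is read, so only one chunk is held in memory at a time, instead
    -- of building a lazy ByteString for the whole part.
    streamChunks :: Int -> Handle -> (B.ByteString -> IO ()) -> IO ()
    streamChunks chunksize h send = loop
      where
        loop = do
            b <- B.hGet h chunksize
            if B.null b
                then return ()
                else send b >> loop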
| |
| | |
Still seems to buffer the whole partsize in memory, but I'm pretty sure my
code is not what's doing it. See https://github.com/aristidb/aws/issues/142
| |
| | |
May not work; if it does, this is gonna be the simplest way to get good
memory use and progress reporting.
| |
| | |
I assume 0.10.6 will have the fix for the bug I reported, which has
already been fixed in master.
|