path: root/Remote
Commit message (Author, Date)
* bittorrent: Fix locking problem when using addurl file://  (Joey Hess, 2014-12-30)
    Fixes: /home/joey/tmp/xxx/.git/annex/misctmp/torrent18347: openFile: resource busy (file is locked)
* fixed all remaining build warnings on Windows  (Joey Hess, 2014-12-29)
* Fix build with -f-S3.  (Joey Hess, 2014-12-19)
* When possible, build with the haskell torrent library for parsing torrent files.  (Joey Hess, 2014-12-18)
* remove default untrusted hack for bittorrent  (Joey Hess, 2014-12-17)
    This is better handled by checkPresent always failing.
* note about http://hackage.haskell.org/package/torrent  (Joey Hess, 2014-12-17)
* make checkkey always fail for torrents  (Joey Hess, 2014-12-17)
    See comment.
* more robust fallback when a file is available from multiple torrents and some torrent files cannot be downloaded  (Joey Hess, 2014-12-17)
* fix fencepost error and aria resume after partial download of multi-file torrent  (Joey Hess, 2014-12-17)
* remove excess directory  (Joey Hess, 2014-12-17)
* fix torrentUrlNum when there is no #n  (Joey Hess, 2014-12-17)
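    A minimal sketch of that parsing, assuming url#n picks the nth file of a multi-file torrent and that a missing fragment means the first file (illustrative code only, not the actual torrentUrlNum implementation):

        import Data.Maybe (fromMaybe)
        import Text.Read (readMaybe)

        -- Split "http://example.com/foo.torrent#3" into the bare url and the
        -- selected file number, defaulting to 1 when there is no "#n".
        torrentUrlNum :: String -> (String, Int)
        torrentUrlNum u = case break (== '#') u of
            (u', '#':n) -> (u', fromMaybe 1 (readMaybe n))
            _           -> (u, 1)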
* move dummy uuids to Annex.UUID  (Joey Hess, 2014-12-17)
* add aria2 progress parsing  (Joey Hess, 2014-12-17)
* Added bittorrent special remote  (Joey Hess, 2014-12-16)
    addurl behavior change: When downloading an url ending in .torrent, it will download files from bittorrent, instead of the old behavior of adding the torrent file to the repository.
    Added Recommends on aria2 and bittornado | bittorrent.
    This commit was sponsored by Asbjørn Sloth Tønnesen.
* reformat  (Joey Hess, 2014-12-16)
* sanitize filepaths provided by checkUrl  (Joey Hess, 2014-12-11)
* simplify external special remote implementation  (Joey Hess, 2014-12-11)
* use subdir for addurl when it creates multiple files  (Joey Hess, 2014-12-11)
    The --file parameter specifies the subdir in this mode.
* Expand checkurl to support recommended filename, and multi-file-urls  (Joey Hess, 2014-12-11)
    This commit was sponsored by an anonymous bitcoiner.
* Revert "let url claims optionally include a suggested filename"  (Joey Hess, 2014-12-11)
    This reverts commit bc0bf97b20c48e1d1a35d25e2e76a311c102438c.
    Putting the filename in the claim was a bad idea.
* let url claims optionally include a suggested filename  (Joey Hess, 2014-12-11)
* unmangled mangled urls from the log before passing to external special remote  (Joey Hess, 2014-12-08)
* Urls can now be claimed by remotes. This will allow creating, for example, an external special remote that handles magnet: and *.torrent urls.  (Joey Hess, 2014-12-08)
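    The claim check is a line-based exchange over the external special remote's stdin/stdout: git-annex sends CLAIMURL <url> and the remote answers CLAIMURL-SUCCESS or CLAIMURL-FAILURE (message names per the external special remote protocol, as is the UNSUPPORTED-REQUEST fallback for unhandled requests). A minimal sketch of the remote side of just this exchange; a real remote must handle the rest of the protocol too:

        import Data.List (isPrefixOf, isSuffixOf)
        import System.IO (BufferMode (LineBuffering), hSetBuffering, isEOF, stdout)

        -- Claim magnet: and *.torrent urls; decline everything else.
        main :: IO ()
        main = hSetBuffering stdout LineBuffering >> loop
          where
            loop = do
                eof <- isEOF
                if eof
                    then return ()
                    else getLine >>= putStrLn . respond >> loop
            respond l = case words l of
                ("CLAIMURL":url:_)
                    | "magnet:" `isPrefixOf` url || ".torrent" `isSuffixOf` url ->
                        "CLAIMURL-SUCCESS"
                    | otherwise -> "CLAIMURL-FAILURE"
                _ -> "UNSUPPORTED-REQUEST"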
* implement CLAIMURL for external special remote  (Joey Hess, 2014-12-08)
* add stub claimUrl  (Joey Hess, 2014-12-08)
* External special remote protocol now includes commands for setting and getting the urls associated with a key.  (Joey Hess, 2014-12-08)
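    In the protocol these are messages the special remote sends to git-annex: SETURLPRESENT <key> <url> and SETURLMISSING <key> <url> record or remove an url for a key, and GETURLS <key> <prefix> is answered by one VALUE <url> line per recorded url, terminated by an empty VALUE (message names per the external special remote protocol documentation). A minimal sketch of remote-side helpers with hypothetical names, assuming stdin/stdout are connected to git-annex:

        import System.IO (hFlush, stdout)

        -- Record that the key's content can be downloaded from the given url.
        setUrlPresent :: String -> String -> IO ()
        setUrlPresent key url = do
            putStrLn (unwords ["SETURLPRESENT", key, url])
            hFlush stdout

        -- Ask git-annex for the urls recorded for a key (only those starting
        -- with the prefix); collect VALUE replies until the empty VALUE.
        getUrls :: String -> String -> IO [String]
        getUrls key prefix = do
            putStrLn (unwords ["GETURLS", key, prefix])
            hFlush stdout
            collect
          where
            collect = do
                l <- getLine
                case words l of
                    ["VALUE"]      -> return []
                    ("VALUE":rest) -> (unwords rest :) <$> collect
                    _              -> collect  -- skip anything unrelated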
* Merge branch 's3-aws'  (Joey Hess, 2014-12-03)
* | Don't show "(gpg)" when decrypting the remote encryption cipher, since this could be taken to read that's the only time git-annex runs gpg, which is not the case.  (Joey Hess, 2014-12-02)
| * support S3 front-end used by globalways.net  (Joey Hess, 2014-11-05)
    This threw an unusual exception w/o an error message when probing to see if the bucket exists yet. So rather than relying on tryS3, catch all exceptions.
    This does mean that it might get an exception for some transient network error, conclude that the bucket does not exist yet, try to create it, and then fail because it already exists.
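    A minimal sketch of that catch-everything probe using Control.Exception; the helper name is hypothetical and the probe action stands in for the real S3 request:

        import Control.Exception (SomeException, catch)

        -- Run a probe (e.g. an S3 bucket query); if it throws anything at all,
        -- assume the bucket does not exist yet. As noted above, a transient
        -- network error gets misread the same way, leading to a create attempt
        -- that then fails because the bucket already exists.
        bucketExists :: IO a -> IO Bool
        bucketExists probe = (probe >> return True) `catch` handler
          where
            handler :: SomeException -> IO Bool
            handler _ = return False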
| * Revert "work around minimum part size problem"Gravatar Joey Hess2014-11-04
| | | | | | | | | | | | This reverts commit 2ba5af49c94b97c586220c3553367988ef095934. I misunderstood the cause of the problem.
| * work around minimum part size problemGravatar Joey Hess2014-11-04
| | | | | | | | | | | | | | | | | | When uploading the last part of a file, which was 640229 bytes, S3 rejected that part: "Your proposed upload is smaller than the minimum allowed size" I don't know what the minimum is, but the fix is just to include the last part into the previous part. Since this can result in a part that's double-sized, use half-sized parts normally.
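    A small sketch of the resulting part layout (hypothetical helper, illustrating the scheme described above): parts are half the configured partsize, and the final remainder is folded into the last part, so an undersized trailing part is never sent and no part exceeds the configured size.

        -- Split a payload of the given total size into part sizes.
        -- e.g. partSizes 10 23 == [5,5,5,8]
        partSizes :: Integer -> Integer -> [Integer]
        partSizes partsize total = go total
          where
            half = max 1 (partsize `div` 2)
            go n
                | n <= 0       = []
                | n < 2 * half = [n]            -- remainder folded into the last part
                | otherwise    = half : go (n - half)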
| * fix a couple type errors and the progress bar  (Joey Hess, 2014-11-04)
| * fix memory leak  (Joey Hess, 2014-11-04)
    Unfortunately, I don't fully understand why it was leaking using the old method of a lazy bytestring. I just know that it was leaking, despite neither hGetUntilMetered nor byteStringPopper seeming to leak by themselves.
    The new method avoids the lazy bytestring, and simply reads chunks from the handle and streams them out to the http socket.
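    A minimal sketch of that chunked streaming, with a hypothetical sink standing in for whatever writes to the http socket and updates the progress meter:

        import qualified Data.ByteString as S
        import System.IO (Handle)

        -- Read fixed-size strict chunks from the handle and hand each one to
        -- the sink; only one chunk is held in memory at a time.
        streamChunks :: Int -> Handle -> (S.ByteString -> IO ()) -> IO ()
        streamChunks chunksize h sink = loop
          where
            loop = do
                b <- S.hGet h chunksize
                if S.null b
                    then return ()
                    else sink b >> loop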
| * combine 2 checks  (Joey Hess, 2014-11-04)
| * casts; now fully working.. but still leaking  (Joey Hess, 2014-11-03)
    Still seems to buffer the whole partsize in memory, but I'm pretty sure my code is not what's doing it. See https://github.com/aristidb/aws/issues/142
| * this should avoid leaking memory  (Joey Hess, 2014-11-03)
| * logic error  (Joey Hess, 2014-11-03)
| * WIP 3  (Joey Hess, 2014-11-03)
| * WIP 2  (Joey Hess, 2014-11-03)
| * WIP try sending using RequestBodyStreamChunked  (Joey Hess, 2014-11-03)
    May not work; if it does this is gonna be the simplest way to get good memory size and progress reporting.
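    http-client's RequestBodyStreamChunked takes a "popper" that yields the next chunk on each call, with an empty chunk meaning end of input. A minimal sketch of wrapping an open Handle that way (the wrapper name is hypothetical; progress reporting would hook into the popper):

        import qualified Data.ByteString as S
        import Network.HTTP.Client (RequestBody (RequestBodyStreamChunked))
        import System.IO (Handle)

        -- Each call of the popper reads the next 64 kB chunk; the empty chunk
        -- returned at end of file terminates the chunked request body.
        handleRequestBody :: Handle -> RequestBody
        handleRequestBody h = RequestBodyStreamChunked $ \needsPopper ->
            needsPopper (S.hGet h 65536)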
| * link to memory leak bug  (Joey Hess, 2014-11-03)
| * improve info display for multipart  (Joey Hess, 2014-11-03)
| * fix build  (Joey Hess, 2014-11-03)
| * adjust version check  (Joey Hess, 2014-11-03)
    I assume 0.10.6 will have the fix for the bug I reported, which got fixed in master already..
| * show multipart configuration in git annex info s3remote  (Joey Hess, 2014-11-03)
| * Merge branch 'master' into s3-aws-multipart  (Joey Hess, 2014-11-03)
| * finish multipart support using unreleased update to aws lib to yield etags  (Joey Hess, 2014-11-03)
    Untested and not even compiled yet.
    Testing should include checks that file content streams through without buffering in memory.
    Note that CL.consume causes all the etags to be buffered in memory. This is probably nearly unavoidable, since a request has to be constructed that contains the list of etags in its body. (While it might be possible to stream generation of the body, that would entail making a http request that dribbles out parts of the body as the multipart uploads complete, which is not likely to work well.)
    To limit this being a problem, it's best for partsize to be set to some suitably large value, like 1gb. Then a full terabyte file will need only 1024 etags to be stored, which will probably use around 1 mb of memory.
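    The arithmetic behind that recommendation, as a quick check (hypothetical helper):

        -- Number of parts (and so etags buffered by CL.consume) for a given
        -- file size and part size, rounding up.
        partsNeeded :: Integer -> Integer -> Integer
        partsNeeded filesize partsize = (filesize + partsize - 1) `div` partsize

        -- partsNeeded (1024 ^ 4) (1024 ^ 3) == 1024
        -- i.e. 1 terabyte at 1gb parts needs 1024 etags, the ~1 mb of memory estimated above.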
* | improve uuid mismatch message  (Joey Hess, 2014-10-28)
| * WIP multipart S3 upload  (Joey Hess, 2014-10-28)
    I'm a little stuck on getting the list of etags of the parts. This seems to require taking the md5 of each part locally, which doesn't get along well with lazily streaming in the part from the file. It would need to read the file twice, or lose laziness and buffer a whole part -- but parts might be quite large.
    This seems to be a problem with the API provided; S3 is supposed to return an etag, but that is not exposed. I have filed a bug: https://github.com/aristidb/aws/issues/141
| * fix build  (Joey Hess, 2014-10-23)