path: root/doc/special_remotes
Commit message (Author, Date)
* Merge branch 'master' of ssh://git-annex.branchable.com (Joey Hess, 2015-06-05)
|\
* | S3: Publicly accessible buckets can be used without creds. (Joey Hess, 2015-06-05)
| |
* | public=yes config to send AclPublicRead (Joey Hess, 2015-06-05)
| |     In my tests, this has to be set when uploading a file to the bucket and
| |     then the file can be accessed using the bucketname.s3.amazonaws.com url.
| |     Setting it when creating the bucket didn't seem to make the whole bucket
| |     public, or allow accessing files stored in it. But I have gone ahead and
| |     also sent it when creating the bucket just in case that is needed in
| |     some case.
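
As an illustrative aside, here is a minimal Haskell sketch of the
anonymous-access url pattern the commit above describes; the bucket and key
names are made up, and this is not git-annex code:

    -- Objects uploaded with public=yes carry the AclPublicRead canned ACL,
    -- so they can be fetched without credentials at this url form.
    publicS3Url :: String -> String -> String
    publicS3Url bucket object =
        "http://" ++ bucket ++ ".s3.amazonaws.com/" ++ object

    main :: IO ()
    main = putStrLn (publicS3Url "mybucket" "SHA256E-s1048576--0123abcd.jpg")
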
| * Added a comment: Tahoe-LAFS helper: multiple FURLs for the same grid (junk@5e3eeba2290e8a3fcf938d9f93b0dfa2565dc7b1, 2015-06-05)
|/
* comment (Joey Hess, 2015-06-02)
|
* doc/*.mdwn: Various typo fixes (Øyvind A. Holm, 2015-05-30)
|
* followup (Joey Hess, 2015-05-27)
|
* Added a comment: Finding IPFS hash (rob.syme@92895c98b16fd7a88bed5f10913c522ebfd76c31, 2015-05-26)
|
* Added a comment: Finding IPFS hash (rob.syme@92895c98b16fd7a88bed5f10913c522ebfd76c31, 2015-05-26)
|
* Added a comment: Tahoe-LAFS helper: multiple FURLs for the same grid (junk@5e3eeba2290e8a3fcf938d9f93b0dfa2565dc7b1, 2015-05-24)
|
* remove too much time (anarcat, 2015-05-20)
|
* Added a comment: Amazon Cloud Drive support (https://www.google.com/accounts/o8/id?id=AItOawnWvnTWY6LrcPB4BzYEBn5mRTpNhg5EtEg, 2015-03-28)
|
* Added a comment: about copying to the local store (https://id.koumbit.net/anarcat, 2015-03-07)
|
* fix whereis output example (Joey Hess, 2015-03-05)
|
* Added SETURIPRESENT and SETURIMISSING to external special remote protocol (Joey Hess, 2015-03-05)
|     Useful for things like ipfs that don't use regular urls. An external
|     special remote can add a regular url to a key, and then git-annex get
|     will download it from the web. But for ipfs, we want to instead tell
|     git-annex that the uri uses OtherDownloader. Before this change, the
|     external special remote protocol lacked a way to do that.
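
As a hedged illustration of the new protocol messages, this Haskell sketch
shows roughly what an external special remote might print on stdout after
storing a key in ipfs; the key, the hash, and the "ipfs:" uri prefix are
assumptions for the example, not something the commit message above specifies:

    import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout)

    -- SETURIPRESENT records a uri that uses OtherDownloader, in contrast to
    -- SETURLPRESENT, which records a regular url fetched from the web
    -- (see the commit message above).
    reportIpfsUri :: String -> String -> IO ()
    reportIpfsUri key ipfsHash =
        putStrLn ("SETURIPRESENT " ++ key ++ " ipfs:" ++ ipfsHash)

    main :: IO ()
    main = do
        hSetBuffering stdout LineBuffering  -- the protocol is line-oriented
        reportIpfsUri "SHA256E-s1024--0123abcd" "QmExampleHashValue"
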
* fix heading (Joey Hess, 2015-03-05)
|
* experimental ipfs special remote, with addurl support (Joey Hess, 2015-03-05)
|
* Added a comment: S3 file/folder names (https://www.google.com/accounts/o8/id?id=AItOawlc-3pdibcizrdz4WmZooECL0k6AvM1cWc, 2015-02-19)
|
* fix link (https://id.koumbit.net/anarcat, 2015-02-16)
|
* (no commit message) (https://id.koumbit.net/anarcat, 2015-02-13)
|
* note when this was introduced, it confused me to not see this in jessie! (https://id.koumbit.net/anarcat, 2015-02-10)
|
* add link to google cloud storage tip (Joey Hess, 2015-02-04)
|
* update my email address and homepage url (Joey Hess, 2015-01-21)
|
* remove obsolete note; s3-aws was merged already (Joey Hess, 2015-01-05)
|
* note about trust and checking copies (Joey Hess, 2014-12-18)
|
* When possible, build with the haskell torrent library for parsing torrent files. (Joey Hess, 2014-12-18)
|
* make checkKey always return unknown (Joey Hess, 2014-12-17)
|
* remove default untrusted hack for bittorrent (Joey Hess, 2014-12-17)
|     This is better handled by checkPresent always failing.
* addurl with #n doesn't work, remove from docs (Joey Hess, 2014-12-17)
|     Doesn't really seem worth making it work; addurl --fast can be used to
|     get a tree of files in the torrent and then the user can rm the ones
|     they don't want.
* Added bittorrent special remote (Joey Hess, 2014-12-16)
|     addurl behavior change: When downloading an url ending in .torrent, it
|     will download files from bittorrent, instead of the old behavior of
|     adding the torrent file to the repository.
|
|     Added Recommends on aria2 and bittornado | bittorrent.
|
|     This commit was sponsored by Asbjørn Sloth Tønnesen.
* fix support for single-file torrents (Joey Hess, 2014-12-11)
|
* move error message to return value (Joey Hess, 2014-12-11)
|
* add working external special remote for torrents (Joey Hess, 2014-12-11)
|     Not IMHO good enough quality to be more than an example, but it does work!
* (no commit message) (https://www.google.com/accounts/o8/id?id=AItOawnWvnTWY6LrcPB4BzYEBn5mRTpNhg5EtEg, 2014-12-08)
|
* Added a comment: bup-join local-arch/2014-12-03-235617 (https://www.google.com/accounts/o8/id?id=AItOawkEYZEqLf3Aj_FGV7S0FvsMplmGqqb555M, 2014-12-04)
|
* Merge branch 's3-aws' (Joey Hess, 2014-12-03)
|\
* | Added a comment: android client (https://www.google.com/accounts/o8/id?id=AItOawn6NYODTE1Sy9YZoi2pvb6i-lcq3dYBxZI, 2014-11-17)
| |
| * reorder (Joey Hess, 2014-11-06)
| |
* | Added a comment (https://olivier.mehani.name/, 2014-11-05)
| |
| * better partsize docs (Joey Hess, 2014-11-04)
| |     The minimum allowed size actually refers to the part size!
| * Revert "work around minimum part size problem" (Joey Hess, 2014-11-04)
| |     This reverts commit 2ba5af49c94b97c586220c3553367988ef095934.
| |
| |     I misunderstood the cause of the problem.
| * work around minimum part size problem (Joey Hess, 2014-11-04)
| |     When uploading the last part of a file, which was 640229 bytes, S3
| |     rejected that part: "Your proposed upload is smaller than the minimum
| |     allowed size"
| |
| |     I don't know what the minimum is, but the fix is just to include the
| |     last part into the previous part. Since this can result in a part
| |     that's double-sized, use half-sized parts normally.
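
Here is a simplified sketch of the splitting rule described above, not the
code in Remote/S3.hs: parts are normally half the configured size, and a final
short remainder is folded into the part before it, so a merged part never
exceeds the configured size:

    -- Sizes of the parts a payload of 'total' bytes would be split into.
    partSizes :: Integer -> Integer -> [Integer]
    partSizes partsize total = go total
      where
        half = max 1 (partsize `div` 2)
        go remaining
            | remaining <= 0 = []
            | remaining < 2 * half = [remaining]  -- absorb the short tail
            | otherwise = half : go (remaining - half)

    main :: IO ()
    main = print (partSizes 8 21)  -- [4,4,4,4,5]: no undersized final part
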
| * WIP multipart S3 upload (Joey Hess, 2014-10-28)
| |     I'm a little stuck on getting the list of etags of the parts. This
| |     seems to require taking the md5 of each part locally, which doesn't
| |     get along well with lazily streaming in the part from the file. It
| |     would need to read the file twice, or lose laziness and buffer a
| |     whole part -- but parts might be quite large.
| |
| |     This seems to be a problem with the API provided; S3 is supposed to
| |     return an etag, but that is not exposed. I have filed a bug:
| |     https://github.com/aristidb/aws/issues/141
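
A sketch of the workaround being weighed above (assuming the cryptonite
package for MD5; not what the remote ended up doing): hashing each part
locally forces the whole part to be consumed, which is exactly the tension
with lazily streaming it from the file:

    import Crypto.Hash (Digest, MD5, hashlazy)
    import qualified Data.ByteString.Lazy as L
    import qualified Data.ByteString.Lazy.Char8 as L8
    import Data.Int (Int64)

    -- md5 of each fixed-size part of a lazily read payload; every digest
    -- requires the full part to be read before its upload can be finalized.
    partMD5s :: Int64 -> L.ByteString -> [Digest MD5]
    partMD5s partsize = map hashlazy . chunks
      where
        chunks b
            | L.null b = []
            | otherwise = let (p, rest) = L.splitAt partsize b
                          in p : chunks rest

    main :: IO ()
    main = mapM_ print (partMD5s 5 (L8.pack "example multipart payload"))
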
| * Merge branch 'master' into s3-aws (Joey Hess, 2014-10-22)
| |\
| |/
|/|
| |     Conflicts:
| |         Remote/S3.hs
* | add encryption setting to examples (Joey Hess, 2014-10-02)
| |
| * Merge branch 'master' into s3-aws (Joey Hess, 2014-08-15)
| |\
| |/
|/|
| |     Conflicts:
| |         git-annex.cabal
* | clarify config option name (Joey Hess, 2014-08-15)
| |
| * pass metadata headers and storage class to S3 when putting objects (Joey Hess, 2014-08-09)
| |
| * WIP converting S3 special remote from hS3 to aws library (Joey Hess, 2014-08-08)
|/
|     Currently, initremote works, but not the other operations. They should
|     be fairly easy to add from this base.
|
|     Also, https://github.com/aristidb/aws/issues/119 blocks internet archive
|     support.
|
|     Note that since http-conduit is used, this also adds https support to S3.
|     Although git-annex encrypts everything anyway, so that may not be
|     extremely useful. It is not enabled by default, because existing S3
|     special remotes have port=80 in their config. Setting port=443 will
|     enable it.
|
|     This commit was sponsored by Daniel Brockman.
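
A toy restatement of the opt-in rule above (a hypothetical helper, not the
remote's actual code): https is only used when the remote's stored config is
changed to port=443:

    -- Pick the url scheme from the port recorded in the remote's config.
    schemeForPort :: Int -> String
    schemeForPort 443 = "https://"
    schemeForPort _   = "http://"

    main :: IO ()
    main = do
        putStrLn (schemeForPort 80  ++ "s3.amazonaws.com")  -- existing remotes
        putStrLn (schemeForPort 443 ++ "s3.amazonaws.com")  -- after port=443
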
* convert WebDAV to new special remote interface, adding new-style chunking support (Joey Hess, 2014-08-06)
|     Reusing http connections when operating on chunks is not done yet; I had
|     to submit some patches to DAV to support that. However, this is no slower
|     than old-style chunking was.
|
|     Note that it's a fileRetriever and a fileStorer, despite DAV using
|     bytestrings that would allow streaming. As a result, upload/download of
|     encrypted files is made a bit more expensive, since it spools them to
|     temp files. This was needed to get the progress meters to work. There are
|     probably ways to avoid that.
|
|     But it turns out that the current DAV interface buffers the whole file
|     content in memory, and I have sent in a patch to DAV to improve its
|     interfaces. Using the new interfaces, it's certainly going to need to be
|     a fileStorer, in order to read the file size from the file (getting the
|     size of a bytestring would destroy laziness). It should be possible to
|     use the new interface to make it be a byteRetriever, so I'll change that
|     when I get to it.
|
|     This commit was sponsored by Andreas Olsson.
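
To make the fileStorer / byteRetriever distinction above concrete, here are
deliberately simplified, hypothetical type signatures; the real git-annex
interfaces carry more (progress meters, chunk configuration, and so on):

    import qualified Data.ByteString.Lazy as L

    type Key = String

    -- A fileStorer is handed the content as a file on disk, so the remote can
    -- read the size from the file before uploading, which is what DAV needs.
    type FileStorer = Key -> FilePath -> IO Bool

    -- A byteRetriever hands content back as a lazy ByteString that can be
    -- streamed to a sink; asking for its length would destroy the laziness.
    type ByteRetriever = Key -> (L.ByteString -> IO Bool) -> IO Bool
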