From ec3460a5f9e0e2d5c80d2e1a767bab9b34f8d30a Mon Sep 17 00:00:00 2001
From: "http://yarikoptic.myopenid.com/"
Date: Wed, 22 May 2013 18:48:59 +0000
Subject: Added a comment: compression -- storage and transfer

---
 .../comment_16_b9d238fb15ad7628e33c90b071e07bb0._comment | 12 ++++++++++++
 1 file changed, 12 insertions(+)
 create mode 100644 doc/special_remotes/comment_16_b9d238fb15ad7628e33c90b071e07bb0._comment

(limited to 'doc/special_remotes')

diff --git a/doc/special_remotes/comment_16_b9d238fb15ad7628e33c90b071e07bb0._comment b/doc/special_remotes/comment_16_b9d238fb15ad7628e33c90b071e07bb0._comment
new file mode 100644
index 000000000..8b1fcd831
--- /dev/null
+++ b/doc/special_remotes/comment_16_b9d238fb15ad7628e33c90b071e07bb0._comment
@@ -0,0 +1,12 @@
+[[!comment format=mdwn
+ username="http://yarikoptic.myopenid.com/"
+ nickname="site-myopenid"
+ subject="compression -- storage and transfer"
+ date="2013-05-22T18:48:59Z"
+ content="""
+Is there any remote that would not only compress during transfer (I believe rsync does that, right?) but also store objects compressed?
+
+I thought bup would do both -- but it seems that git-annex receives data uncompressed from a bup remote, and a bup remote requires ssh access.
+
+In my case I want to make publicly available files that are binary blobs which compress very well. It would be a pity to waste storage on my end and also incur significant traffic, both of which could be avoided if the data were transferred compressed. Maybe HTTP compression (http://en.wikipedia.org/wiki/HTTP_compression) could be used efficiently for this purpose (I am not sure whether the payload could already reside on the server in compressed form, to avoid spending server time re-compressing it on every request)?
+"""]]
-- 
cgit v1.2.3
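
As an aside on the HTTP-compression idea raised in the comment: below is a minimal sketch, in Python, of serving blobs that are gzipped once at rest and shipped verbatim with `Content-Encoding: gzip`, so the data is compressed both in storage and in transfer and the server does no per-request compression work. Nothing here is part of git-annex or the patch above; the handler name, the `objects/` directory, and the port are all hypothetical.

    import http.server
    import pathlib

    # Hypothetical directory of blobs compressed once, at store time,
    # e.g. objects/somekey.gz produced by "gzip --keep somekey".
    STORE = pathlib.Path("objects")

    class PrecompressedHandler(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            # No path sanitization here -- this is a sketch, not a server
            # to expose publicly.
            path = STORE / (self.path.lstrip("/") + ".gz")
            # A compliant server only claims gzip if the client accepts it.
            if "gzip" not in self.headers.get("Accept-Encoding", ""):
                self.send_error(406, "client must accept gzip")
                return
            if not path.is_file():
                self.send_error(404)
                return
            data = path.read_bytes()  # already-compressed bytes, sent as-is
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Encoding", "gzip")
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

    if __name__ == "__main__":
        http.server.HTTPServer(("", 8000), PrecompressedHandler).serve_forever()

A client that advertises `Accept-Encoding: gzip` (for example `curl --compressed`, or the Python `requests` library) decompresses transparently on receipt, which is the "already reside in a compressed form" arrangement the comment asks about.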