From 544d6d840a20180e87a53b3a175558a87a8b8ec2 Mon Sep 17 00:00:00 2001 From: "interfect@b151490178830f44348aa57b77ad58c7d18e8fe7" Date: Wed, 21 Sep 2016 22:49:56 +0000 Subject: Added a comment --- .../comment_5_2b49a293b044d0c2fcfe1701c76424c4._comment | 10 ++++++++++ 1 file changed, 10 insertions(+) create mode 100644 doc/todo/Allow_globally_limiting_filename_length/comment_5_2b49a293b044d0c2fcfe1701c76424c4._comment diff --git a/doc/todo/Allow_globally_limiting_filename_length/comment_5_2b49a293b044d0c2fcfe1701c76424c4._comment b/doc/todo/Allow_globally_limiting_filename_length/comment_5_2b49a293b044d0c2fcfe1701c76424c4._comment new file mode 100644 index 000000000..8c4c6f09a --- /dev/null +++ b/doc/todo/Allow_globally_limiting_filename_length/comment_5_2b49a293b044d0c2fcfe1701c76424c4._comment @@ -0,0 +1,10 @@ +[[!comment format=mdwn + username="interfect@b151490178830f44348aa57b77ad58c7d18e8fe7" + nickname="interfect" + subject="comment 5" + date="2016-09-21T22:49:55Z" + content=""" +OK, I'll try something like that. + +(Full disk encryption is still there; I think on one system I just have ecryptfs, because I want to be able to get in over ssh sometimes, and on one I have *both* FDE and ecryptfs on, because I enjoy performance penalties.) 
+"""]] -- cgit v1.2.3 From a514c8a79091dbe998cffe6de3e95821e0ac1273 Mon Sep 17 00:00:00 2001 From: JasonWoof Date: Thu, 22 Sep 2016 00:40:08 +0000 Subject: Added a comment: simpler use case --- ...ent_2_7255f5083283b0ae7e7fc41e127bd829._comment | 26 ++++++++++++++++++++++ 1 file changed, 26 insertions(+) create mode 100644 doc/todo/transitive_transfers/comment_2_7255f5083283b0ae7e7fc41e127bd829._comment diff --git a/doc/todo/transitive_transfers/comment_2_7255f5083283b0ae7e7fc41e127bd829._comment b/doc/todo/transitive_transfers/comment_2_7255f5083283b0ae7e7fc41e127bd829._comment new file mode 100644 index 000000000..54d9c8029 --- /dev/null +++ b/doc/todo/transitive_transfers/comment_2_7255f5083283b0ae7e7fc41e127bd829._comment @@ -0,0 +1,26 @@ +[[!comment format=mdwn + username="JasonWoof" + subject="simpler use case" + date="2016-09-22T00:40:07Z" + content=""" +Here's my use case (much simpler) + +Three git repos: + +desktop: normal checkout, source of almost all annexd files, commits, etc.. The only place I run git annex commands. Not enough space to stored all annexed files + +main_external: bare git repo, stores all annext file contents, but no file tree. Usually connected. Purpose: primary backups + +old_external: like main_external, except connected only occasionally. + + +I periodically copy from desktop to main_external. That's all well and good. + +The tricky part is when I plug in old_external and want to get everything on there. It's hard to get content onto old_external that is stored only on main_external. That's when I want to: + + git annex copy --from=main_external --to=old_external --not --in old_external + +Note that this would _not_ copy obsolete data (ie only referenced from old git commits) stored in old_external. I like that. + +To work around the lack of that feature, I try to keep coppies on desktop until I've had a chance to copy them to both external drives. 
It's good for numcopies, but I don't like having to keep track of it, and for replaceable data I wish I could choose to let there be just one copy, on main_external. +"""]] -- cgit v1.2.3 From ff2e8063f786fb3ed832e853d56c70e649f2bafe Mon Sep 17 00:00:00 2001 From: "https://anarc.at/openid/" Date: Thu, 22 Sep 2016 12:43:12 +0000 Subject: Added a comment: thanks for considering this! --- ...ent_3_e5c2ede203e7bdb5af8432df6c09268f._comment | 81 ++++++++++++++++++++++ 1 file changed, 81 insertions(+) create mode 100644 doc/todo/transitive_transfers/comment_3_e5c2ede203e7bdb5af8432df6c09268f._comment diff --git a/doc/todo/transitive_transfers/comment_3_e5c2ede203e7bdb5af8432df6c09268f._comment b/doc/todo/transitive_transfers/comment_3_e5c2ede203e7bdb5af8432df6c09268f._comment new file mode 100644 index 000000000..7023cd26a --- /dev/null +++ b/doc/todo/transitive_transfers/comment_3_e5c2ede203e7bdb5af8432df6c09268f._comment @@ -0,0 +1,81 @@ +[[!comment format=mdwn + username="https://anarc.at/openid/" + nickname="anarcat" + subject="thanks for considering this!" + date="2016-09-22T12:43:11Z" + content=""" +> (Let's not discuss the behavior of copy --to when the file is not +> locally present here; there is plenty of other discussion of that in +> eg http://bugs.debian.org/671179) + +Agreed, it's kind of secondary. + +> git-annex's special remote API does not allow remote-to-remote +> transfers without spooling it to a file on disk first. + +Yeah, I noticed that when writing my own special remote. + +> And it's not possible to do using rsync on either end, AFAICS. + +That is correct. + +> It would be possible in some other cases but this would need to be +> implemented for each type of remote as a new API call. + +... and it would fail for most, so there's little benefit there. + +How about a socket or FIFO of some sort? I know those break a lot of +semantics (e.g. `[ -f /tmp/fifo ]` fails in bash) but they could be a +solution... 
+ +> Modern systems tend to have quite a large disk cache, so it's quite +> possible that going via a temp file on disk is not going to use a +> lot of disk IO to write and read it when the read and write occur +> fairly close together. + +True. There are also in-memory files that could be used, although I +don't think this would work across different process spaces. + +> The main benefit from streaming would probably be if it could run +> the download and the upload concurrently. + +For me, the main benefit would be dealing with low disk space +conditions, which are quite common on my machines: I often cram the +disk almost to capacity with good stuff I want to listen to +later... git-annex allows me to freely remove stuff when I need the +space, but it often means I am close to 99% capacity on the media +drives I use. + +> But that would only be a benefit sometimes. With an asymmetric +> connection, saturating the uplink tends to swamp downloads. Also, +> if download is faster than upload, it would have to throttle +> downloads (which complicates the remote API much more), or buffer +> them to memory (which has its own complications). + +That is true. + +> Streaming the download to the upload would at best speed things up +> by a factor of 2. It would probably work nearly as well to upload +> the previously downloaded file while downloading the next file. + +Presented like that, it's true that the benefits of streaming are not +good enough to justify the complexity; the only problem is large +files and low local disk space... but maybe we can delegate that +solution to the user: \"free up at least enough space for one of those +files you want to transfer\". + +[... -J magic stuff ...] + +> And there is a complication with running that at the same time as eg +> git annex get of the same file. It would be surprising for get to +> succeed (because copy has already temporarily downloaded the file) +> and then have the file later get dropped. 
So, it seems that copy +> --from --to would need to stash the content away in a temp file +> somewhere instead of storing it in the annex proper. + +My thoughts exactly: actually copying the files into the local repo +introduces all sorts of weird --numcopies nastiness and race +conditions, it seems to me. + +Thanks for considering this! +"""]] -- cgit v1.2.3
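As an editorial aside on the FIFO idea raised in the thread above: the semantics anarcat mentions can be demonstrated in plain POSIX shell. This is only an illustrative sketch, not anything git-annex does; the paths are made up for the example. It shows that a named pipe streams data between two processes without a regular temp file, while ordinary file tests like `-f` fail on it:

```shell
#!/bin/sh
# Sketch: stream data between a "downloader" and an "uploader"
# through a named pipe, so the payload never lands in a regular file.
# All paths here are illustrative.
set -e
dir=$(mktemp -d)
fifo="$dir/pipe"
mkfifo "$fifo"

# A FIFO is not a regular file: -f fails, -p succeeds.
[ -f "$fifo" ] && echo "regular file" || echo "not a regular file"
[ -p "$fifo" ] && echo "named pipe"

# The writer blocks until a reader opens the other end,
# which is one of the semantic differences from a temp file.
printf 'hello from the downloader\n' > "$fifo" &
cat "$fifo"   # the "upload" side consumes the stream

wait
rm -r "$dir"
```

This also illustrates why a FIFO cannot simply replace the spool file in the special remote API: any code that stats the object, checks its size, or reopens it would break, since the stream can only be read once.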