From 1f6cfecc972b121fa42ea80383183bbaccc2195a Mon Sep 17 00:00:00 2001
From: Joey Hess
Date: Thu, 29 May 2014 15:23:05 -0400
Subject: remove old closed bugs and todo items to speed up wiki updates and
 reduce size

Remove closed bugs and todos that were last edited before 2014.

Command line used:

    for f in $(grep -l '\[\[done\]\]' *.mdwn); do
        if [ -z $(git log --since=2014 --pretty=oneline "$f") ]; then
            git rm $f
            git rm -rf $(echo "$f" | sed 's/.mdwn$//')
        fi
    done
---
 doc/bugs/feature_request:_addhash.mdwn | 29 -----------------------------
 1 file changed, 29 deletions(-)
 delete mode 100644 doc/bugs/feature_request:_addhash.mdwn

diff --git a/doc/bugs/feature_request:_addhash.mdwn b/doc/bugs/feature_request:_addhash.mdwn
deleted file mode 100644
index e818a3327..000000000
--- a/doc/bugs/feature_request:_addhash.mdwn
+++ /dev/null
@@ -1,29 +0,0 @@
-### Use case 1
-
-I have a big repo using the SHA256E back-end. Along comes a shiny new SKEIN512E back-end and I would like to transition to it, because it is faster and moves from a ridiculously low to a ludicrously low risk of collisions.
-
-I can set `.gitattributes` to use the new back-end for any newly added files, but then when I import some arbitrary mix of existing and new files into the repo it will no longer deduplicate them; it will add all of the files under the new hash scheme.
-
-### Use case 2
-
-I have a big repo of files I have added using `addurl --fast`. I download the files, and they are in the repo, but they are keyed only by their URLs, so:
-
- - I cannot verify later that none of them have been damaged.
- - If I come across an offline collection of some of the files, I cannot easily get them into the annex with a simple import.
-
-### Workaround
-
-In both these cases, what I can do is unlock (maybe?) or unannex (definitely) all of the files and re-add them under the new hash, or use `migrate` to relink the files using the new scheme. In both use cases this means I now risk having duplicates in various clones of the repo, and would have to clean them up with `dropunused` -- after first having either re-copied them from a repo that already has them under the new hash, or `migrate`d them in each clone using the pre-migration commit. Either way is problematic for special remotes, in particular glacier. I also lose the continuity of the history of each object.
-
-In use case 2 I also lose the URLs of the files and would have to re-add them using `addurl`. This is probably not true when using `migrate`.
-
-... which brings me to the proposed feature.
-
-### addhash
-
-Symmetrical to `addurl`, which in one form can take an existing hashed (or URL-sourced) file and add a URL source to it, `addhash` would take an existing URL-sourced (or hashed) file and add a hash reference to it (provided the file is present in the annex so that the hash can be calculated) -- an alias under which it may also be identified, in addition to the existing URL or hash.
-
- - Any file added to the annex after `addhash` would reuse the symlink name of the original key if its hash matches the `addhash`ed one.
- - An `fsck` run would use one of the available hashes to verify the integrity of the file, perhaps according to some internal order of preference, or possibly a configurable one.
-
-> [[done]] --[[Joey]]
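
The back-end switch described under "Use case 1" is normally configured through the `annex.backend` attribute in `.gitattributes`. A minimal sketch of that setup, using the SKEIN512E name only because the request mentions it (any backend the installed git-annex supports would do, and `newfile.iso` is a placeholder):

    # Use the new backend for files added from now on; files added earlier
    # keep their old SHA256E keys until they are migrated.
    echo '* annex.backend=SKEIN512E' >> .gitattributes
    git annex add newfile.iso   # gets a SKEIN512E key, so re-importing content
                                # already stored under SHA256E is no longer
                                # recognised as a duplicate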
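
Use case 2 hinges on `git annex addurl --fast`, which records a file under a URL-based key without downloading or checksumming the content. A short sketch of how such files end up with no verifiable hash (the URL is a placeholder):

    # --fast records a URL key; no checksum is ever computed
    git annex addurl --fast http://example.com/big.iso
    git annex get big.iso    # fetch the content later
    git annex fsck big.iso   # can only check presence and size here,
                             # since a URL key carries no checksum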
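
The "Workaround" paragraph compresses several steps into one sentence. A hedged sketch of the `migrate` route it settles on, again assuming the SKEIN512E backend from the request; the `dropunused` range is illustrative (the real numbers come from the `unused` listing), and each clone and special remote has to be cleaned up separately, which is exactly the pain point the request describes:

    # Point the repo at the new backend, then rewrite the keys of local content
    echo '* annex.backend=SKEIN512E' >> .gitattributes
    git annex migrate .

    # The old keys now show up as unused data in every clone that holds them
    git annex unused
    git annex dropunused 1-29   # numbers/ranges taken from the `unused` output

    # Integrity checks afterwards run against the new keys
    git annex fsck .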
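
`addhash` itself was only a proposal in this (now closed) request, so the invocation below is purely hypothetical -- it is included to make the symmetry with `addurl` concrete, not to describe an existing git-annex command:

    # HYPOTHETICAL: addhash does not exist in git-annex.
    # The idea: checksum a file that is already present in the annex and
    # record that hash as an extra alias next to its existing URL key.
    git annex addhash --backend=SKEIN512E big.iso

    # Later imports of identical content could then match the recorded hash,
    # and fsck could verify the content against it.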