path: root/Command
Commit message (Author, Date)
...
* adjusted branches, proof of concept (Joey Hess, 2016-02-25)
  "git annex adjust" may be a temporary interface, but works for a proof of
  concept. It is pretty fast at creating the adjusted branch. The main overhead
  is injecting pointer files. It might be worth optimising that by reusing the
  symlink target as the pointer file content. When I tried to do that, the
  problem was that the clean filter doesn't use that same format, and so git
  thought files had changed. That could be dealt with; perhaps make the clean
  filter use symlink format for pointer files when on an adjusted branch?
  But the real overhead is in checking out the branch, when git runs the smudge
  filter once per file. That is perhaps too slow to be usable, although it may
  only affect the initial checkout of the branch, and not updates. TBD.
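  A minimal usage sketch of the interface described above (the command name is
  taken from the commit; the adjusted branch name in the comment is only
  illustrative):

      # enter an adjusted branch of the current branch; git-annex creates and
      # checks out something like adjusted/<branch>, injecting pointer files
      git annex adjust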
* info: Mention when run in a dead repository. (Joey Hess, 2016-02-19)
* fsck: When the only copy of a file is in a dead repository, mention the
  repository. (Joey Hess, 2016-02-19)
* annex.addunlocked (Joey Hess, 2016-02-16)
  * add, addurl, import, importfeed: When in a v6 repository on a crippled
    filesystem, add files unlocked.
  * annex.addunlocked: New configuration setting, makes files always be added
    unlocked. (v6 only)
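  A configuration sketch for the new setting, assuming it takes a simple
  boolean value as the commit suggests:

      # make every subsequent "git annex add" add files unlocked (v6 only)
      git config annex.addunlocked true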
* move old ghc compat code into separate module; eliminate WITH_CLIBS (Joey Hess, 2016-02-15)
  This avoids hsc2hs being run except when building for the old version of ghc.
  Should speed up builds.
* fsck: Populate unlocked files in v6 repositories whose content is present in
  annex/objects but didn't reach the work tree. (Joey Hess, 2016-02-14)
  This also handles fixing up after f9dfeaf801da2e4d5879b3de5895dc3cef68a329.
* fsck: Detect and fix missing associated file mappings in v6 repositories. (Joey Hess, 2016-02-14)
  This also handles fixing up after the bad data written by
  f9dfeaf801da2e4d5879b3de5895dc3cef68a329.
* files with only 1 linkCount may still be unlocked (Joey Hess, 2016-02-14)
  When on a crippled filesystem, or without annex.thin set.
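  For context, annex.thin is the setting referred to above; a minimal sketch of
  enabling it, assuming a filesystem with hard link support:

      # let unlocked working tree files be hard links to the annexed content
      # instead of separate copies, so their link count is greater than 1
      git config annex.thin true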
* clean up (Joey Hess, 2016-02-14)
* avoid --batch crashing if a remote fails to be accessed (Joey Hess, 2016-02-12)
* checkpresentkey: Allow it to be run without an explicit remote, and add --batch (Joey Hess, 2016-02-12)
  * checkpresentkey: Allow it to be run without an explicit remote.
  * checkpresentkey: Added --batch.
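  A sketch of the two invocation styles this change allows; the key and remote
  names are placeholders, and the exact batch input/output format is an
  assumption (one key per line on stdin):

      # explicit remote: result indicates whether that remote has the key
      git annex checkpresentkey SHA256E-s1048576--0123abcd myremote

      # batch mode, no explicit remote required after this change
      echo SHA256E-s1048576--0123abcd | git annex checkpresentkey --batch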
* matchexpression: Added --largefiles option to parse an annex.largefiles
  expression. (Joey Hess, 2016-02-03)
* Limit annex.largefiles parsing to the subset of preferred content expressions
  that make sense in its context. (Joey Hess, 2016-02-03)
  So, not "standard" or "lackingcopies", etc.
* annex.largefiles can be configured in .gitattributes too (Joey Hess, 2016-02-02)
  This is particularly useful for v6 repositories, since the .gitattributes
  configuration will apply in all clones of the repository.
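  A hypothetical .gitattributes setup; gitattributes values cannot contain
  whitespace, so only space-free expressions are shown, and the size threshold
  is purely illustrative:

      # treat files over 100kb as large (annexed), but never annex C source
      cat >> .gitattributes <<'EOF'
      * annex.largefiles=largerthan=100kb
      *.c annex.largefiles=nothing
      EOF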
* Fix reversion in lookupkey, contentlocation, and examinekey which caused them
  to sometimes output side messages. (Joey Hess, 2016-01-29)
* annex.addsmallfiles: New option controlling what is done when adding files
  not matching annex.largefiles. (Joey Hess, 2016-01-28)
* Ord constraint redundant (Gabor Greif, 2016-01-28)
* Fix nasty reversion in the last release that broke sync --content's handling
  of many preferred content expressions. (Joey Hess, 2016-01-26)
  The type checker should have noticed this, but the changes to mapM that make
  it accept any Traversable hid the fact that it was not being passed a list at
  all. Thus, what should have returned an empty list most of the time instead
  returned [""], which was treated as the name of the associated file, with
  disastrous consequences.
  When I have time, I should add a test case checking what sync --content
  drops. I should also consider replacing mapM with one re-specialized to lists.
* remove 3 build flags (Joey Hess, 2016-01-26)
  * Removed the webapp-secure build flag, rolling it into the webapp build flag.
  * Removed the quvi and tahoe build flags; removing them only adds aeson to
    the core dependencies.
  * Removed the feed build flag; removing it only adds feed to the core
    dependencies.
  Build flags have a cost in code complexity, and they also make Setup's
  configure step work harder to find a usable set of build flags when some
  dependencies are missing.
* matchexpression: New plumbing command to check if a preferred content
  expression matches some data. (Joey Hess, 2016-01-25)
* remove 163 lines of code without changing anything except imports (Joey Hess, 2016-01-20)
* make noMessages disable closing of json object in --json mode (Joey Hess, 2016-01-20)
  This allows things like Command.Find to use noMessages and generate their own
  complete json objects. Previously, Command.Find managed that only via a hack,
  which wasn't compatible with batch mode.
  Only Command.Find, Command.Smudge, and Command.Status use noMessages
  currently, and none except for Command.Find are impacted by this change.
  Fixes find --json --batch output.
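  A sketch of the combination the fix above targets, with placeholder file
  names; the assumption is that batch mode reads one path per line from stdin
  and emits one complete JSON object per matching file:

      printf '%s\n' foo.jpg bar.iso | git annex find --json --batch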
* remove unused imports (Joey Hess, 2016-01-20)
* remove no longer needed noMessages (Joey Hess, 2016-01-20)
  All three of these are using batch mode to drive their processing, so there
  is no automatic output, and noMessages is no longer needed.
* find --batch (Joey Hess, 2016-01-20)
* remove excess space (Joey Hess, 2016-01-20)
* whereis --batch (Joey Hess, 2016-01-20)
* add --batch (Joey Hess, 2016-01-19)
* refactor (Joey Hess, 2016-01-19)
* registerurl: Check if a remote claims the url, same as addurl does. (Joey Hess, 2016-01-19)
* addurl --json: Include field for added key (Joey Hess, 2016-01-19)
  (unless the file was added directly to git due to annex.largefiles
  configuration.)
  (Also done by add --json and import --json.)
* add, import: Support --json output. (Joey Hess, 2016-01-19)
  Include added key in output.
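  A usage sketch with a hypothetical file name; per the commit, the JSON record
  now carries the added key, unless annex.largefiles routed the file directly
  into git:

      # add a file and emit a JSON record that includes the key it was added under
      git annex add --json big.iso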
* info: Support --batch mode. (Joey Hess, 2016-01-15)
* convert existing non-annexed file to non-exception (Joey Hess, 2016-01-15)
* whereis --json: Urls are now listed inside the remote that claims them,
  rather than all together at the end. (Joey Hess, 2016-01-15)
* addurl: Refuse to overwrite any existing, non-annexed file. (Joey Hess, 2016-01-13)
* addurl: Support --json, particularly useful in --batch mode. (Joey Hess, 2016-01-13)
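  A sketch combining the two options mentioned above, with placeholder URLs;
  the assumption is that batch mode reads one url per line from stdin and emits
  one JSON record per url:

      printf '%s\n' \
          'https://example.com/a.tar.gz' \
          'https://example.com/b.tar.gz' | git annex addurl --json --batch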
* change keys database to use IKey type with more efficient serialization (Joey Hess, 2016-01-12)
  This breaks any existing keys database!
  IKey serializes more efficiently than SKey, although this limits the use of
  its Read/Show instances.
  This makes the keys database use less disk space, and so should be a win.
  Updated benchmark:
    benchmarking keys database/getAssociatedFiles from 1000 (hit)
    time     64.04 μs  (63.95 μs .. 64.13 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     64.02 μs  (63.96 μs .. 64.08 μs)
    std dev  218.2 ns  (172.5 ns .. 299.3 ns)
    benchmarking keys database/getAssociatedFiles from 1000 (miss)
    time     52.53 μs  (52.18 μs .. 53.21 μs)
             0.999 R²  (0.998 R² .. 1.000 R²)
    mean     52.31 μs  (52.18 μs .. 52.91 μs)
    std dev  734.6 ns  (206.2 ns .. 1.623 μs)
    benchmarking keys database/getAssociatedKey from 1000 (hit)
    time     64.60 μs  (64.46 μs .. 64.77 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     64.74 μs  (64.57 μs .. 65.20 μs)
    std dev  900.2 ns  (389.7 ns .. 1.733 μs)
    benchmarking keys database/getAssociatedKey from 1000 (miss)
    time     52.46 μs  (52.29 μs .. 52.68 μs)
             1.000 R²  (0.999 R² .. 1.000 R²)
    mean     52.63 μs  (52.35 μs .. 53.37 μs)
    std dev  1.362 μs  (562.7 ns .. 2.608 μs)
    variance introduced by outliers: 24% (moderately inflated)
    benchmarking keys database/addAssociatedFile to 1000 (old)
    time     487.3 μs  (484.7 μs .. 490.1 μs)
             1.000 R²  (0.999 R² .. 1.000 R²)
    mean     490.9 μs  (487.8 μs .. 496.5 μs)
    std dev  13.95 μs  (6.841 μs .. 22.03 μs)
    variance introduced by outliers: 20% (moderately inflated)
    benchmarking keys database/addAssociatedFile to 1000 (new)
    time     6.633 ms  (5.741 ms .. 7.751 ms)
             0.905 R²  (0.850 R² .. 0.965 R²)
    mean     8.252 ms  (7.803 ms .. 8.602 ms)
    std dev  1.126 ms  (900.3 μs .. 1.430 ms)
    variance introduced by outliers: 72% (severely inflated)
    benchmarking keys database/getAssociatedFiles from 10000 (hit)
    time     65.36 μs  (64.71 μs .. 66.37 μs)
             0.998 R²  (0.995 R² .. 1.000 R²)
    mean     65.28 μs  (64.72 μs .. 66.45 μs)
    std dev  2.576 μs  (920.8 ns .. 4.122 μs)
    variance introduced by outliers: 42% (moderately inflated)
    benchmarking keys database/getAssociatedFiles from 10000 (miss)
    time     52.34 μs  (52.25 μs .. 52.45 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     52.49 μs  (52.42 μs .. 52.59 μs)
    std dev  255.4 ns  (205.8 ns .. 312.9 ns)
    benchmarking keys database/getAssociatedKey from 10000 (hit)
    time     64.76 μs  (64.67 μs .. 64.84 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     64.67 μs  (64.62 μs .. 64.72 μs)
    std dev  177.3 ns  (148.1 ns .. 217.1 ns)
    benchmarking keys database/getAssociatedKey from 10000 (miss)
    time     52.75 μs  (52.66 μs .. 52.82 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     52.69 μs  (52.63 μs .. 52.75 μs)
    std dev  210.6 ns  (173.7 ns .. 265.9 ns)
    benchmarking keys database/addAssociatedFile to 10000 (old)
    time     489.7 μs  (488.7 μs .. 490.7 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     490.4 μs  (489.6 μs .. 492.2 μs)
    std dev  3.990 μs  (2.435 μs .. 7.604 μs)
    benchmarking keys database/addAssociatedFile to 10000 (new)
    time     9.994 ms  (9.186 ms .. 10.74 ms)
             0.959 R²  (0.928 R² .. 0.979 R²)
    mean     9.906 ms  (9.343 ms .. 10.40 ms)
    std dev  1.384 ms  (1.051 ms .. 2.100 ms)
    variance introduced by outliers: 69% (severely inflated)
* add benchmarks of adding an associated file (Joey Hess, 2016-01-12)
    benchmarking keys database/addAssociatedFile to 1000 (old)
    time     516.1 μs  (514.7 μs .. 517.4 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     514.0 μs  (512.1 μs .. 515.2 μs)
    std dev  4.740 μs  (2.972 μs .. 7.068 μs)
    benchmarking keys database/addAssociatedFile to 1000 (new)
    time     5.750 ms  (4.857 ms .. 6.885 ms)
             0.815 R²  (0.698 R² .. 0.904 R²)
    mean     7.858 ms  (7.311 ms .. 8.421 ms)
    std dev  1.684 ms  (1.383 ms .. 2.027 ms)
    variance introduced by outliers: 88% (severely inflated)
    benchmarking keys database/addAssociatedFile to 10000 (old)
    time     515.7 μs  (514.8 μs .. 516.5 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     515.4 μs  (513.7 μs .. 516.6 μs)
    std dev  4.824 μs  (2.957 μs .. 7.167 μs)
    benchmarking keys database/addAssociatedFile to 10000 (new)
    time     8.934 ms  (7.779 ms .. 10.05 ms)
             0.868 R²  (0.751 R² .. 0.934 R²)
    mean     11.51 ms  (10.66 ms .. 12.26 ms)
    std dev  2.174 ms  (1.816 ms .. 2.747 ms)
    variance introduced by outliers: 82% (severely inflated)
* refactor (Joey Hess, 2016-01-12)
* add database benchmark (Joey Hess, 2016-01-12)
  The benchmark shows that the database access is quite fast indeed! And it
  scales linearly with the number of keys, with one exception, getAssociatedKey.
  Based on this benchmark, I don't think I need to worry about optimising for
  cases where all files are locked and the database is mostly empty. In those
  cases, database access will be misses, and according to this benchmark,
  should add only 50 milliseconds to runtime.
  (NB: There may be some overhead to getting the database opened and locking
  the handle that this benchmark doesn't see.)
    joey@darkstar:~/src/git-annex>./git-annex benchmark
    setting up database with 1000
    setting up database with 10000
    benchmarking keys database/getAssociatedFiles from 1000 (hit)
    time     62.77 μs  (62.70 μs .. 62.85 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     62.81 μs  (62.76 μs .. 62.88 μs)
    std dev  201.6 ns  (157.5 ns .. 259.5 ns)
    benchmarking keys database/getAssociatedFiles from 1000 (miss)
    time     50.02 μs  (49.97 μs .. 50.07 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     50.09 μs  (50.04 μs .. 50.17 μs)
    std dev  206.7 ns  (133.8 ns .. 295.3 ns)
    benchmarking keys database/getAssociatedKey from 1000 (hit)
    time     211.2 μs  (210.5 μs .. 212.3 μs)
             1.000 R²  (0.999 R² .. 1.000 R²)
    mean     211.0 μs  (210.7 μs .. 212.0 μs)
    std dev  1.685 μs  (334.4 ns .. 3.517 μs)
    benchmarking keys database/getAssociatedKey from 1000 (miss)
    time     173.5 μs  (172.7 μs .. 174.2 μs)
             1.000 R²  (0.999 R² .. 1.000 R²)
    mean     173.7 μs  (173.0 μs .. 175.5 μs)
    std dev  3.833 μs  (1.858 μs .. 6.617 μs)
    variance introduced by outliers: 16% (moderately inflated)
    benchmarking keys database/getAssociatedFiles from 10000 (hit)
    time     64.01 μs  (63.84 μs .. 64.18 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     64.85 μs  (64.34 μs .. 66.02 μs)
    std dev  2.433 μs  (547.6 ns .. 4.652 μs)
    variance introduced by outliers: 40% (moderately inflated)
    benchmarking keys database/getAssociatedFiles from 10000 (miss)
    time     50.33 μs  (50.28 μs .. 50.39 μs)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     50.32 μs  (50.26 μs .. 50.38 μs)
    std dev  202.7 ns  (167.6 ns .. 252.0 ns)
    benchmarking keys database/getAssociatedKey from 10000 (hit)
    time     1.142 ms  (1.139 ms .. 1.146 ms)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     1.142 ms  (1.140 ms .. 1.144 ms)
    std dev  7.142 μs  (4.994 μs .. 10.98 μs)
    benchmarking keys database/getAssociatedKey from 10000 (miss)
    time     1.094 ms  (1.092 ms .. 1.096 ms)
             1.000 R²  (1.000 R² .. 1.000 R²)
    mean     1.095 ms  (1.095 ms .. 1.097 ms)
    std dev  4.277 μs  (2.591 μs .. 7.228 μs)
* rekey: No longer copies over urls from the old to the new key. (Joey Hess, 2016-01-07)
  It makes sense for migrate to do that, but not for this low-level (and
  little-used) plumbing command.
* avoid confusing git with a modified ctime in clean filter (Joey Hess, 2016-01-07)
  Linking the file to the tmp dir was not necessary in the clean filter, and it
  caused the ctime to change, which caused git to think the file was changed.
  This caused git status to get slow as it kept re-cleaning unchanged files.
* migrate and rekey v6 unlocked file support (Joey Hess, 2016-01-07)
* migrate: Copy over metadata to new key. (Joey Hess, 2016-01-07)
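  A usage sketch, assuming the general --backend option is used to pick the
  target backend; with this change, metadata attached to the old key follows
  the file to the new key:

      # re-generate the file's key with a different backend; metadata is copied over
      git annex migrate --backend=SHA256E somefile.iso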
* unused: deal with v6 unlocked file that is implicitly ingested by git diff etc (Joey Hess, 2016-01-06)
* cleanup (Joey Hess, 2016-01-06)
* optimise (Joey Hess, 2016-01-06)
  d1ce927d95fe7c331cbff3317797a60aa288738b put a cat-file into the fast
  bloomfilter generation path. Instead, add another bloom filter which diffs
  from the work tree to the index.
  Also, pull the sha of the changed object out of the diffs, and cat that
  object directly, rather than indirecting through the filename.
  Finally, removed some hacks that are unnecessary thanks to the worktree to
  index diff.
* fix parsing of v6 unlocked file (Joey Hess, 2016-01-06)
  The newline broke this ad-hoc parser; use the normal one.
* unused: Bug fix when a new file was added to the annex, and then removed
  (but not removed with git rm). (Joey Hess, 2016-01-06)
  git still has the add staged in this case, so the content should not be
  unused, and was wrongly treated as such. So, we need to look both at the file
  on disk, to see if it's an annex link, and at the file in the index too.
  lookupFile doesn't look in the index if the file is not present on disk.