In the case where the pointer file is in place, and not the content
of the object, lock's performNew was called with filemodified=True,
which caused it to try to repopulate the object from an unmodified
associated file, of which there were none. So, the content of the object
got thrown away incorrectly. This was the cause (although not the root
cause) of data loss in https://github.com/datalad/datalad/issues/1020

The same problem could also occur when the work tree file is modified,
but the object is not, and lock is called with --force. Added a test case
for this, since it's exercising the same code path and is easier to set up
than the problem above.

Note that this only occurred when the keys database did not have an inode
cache recorded for the annex object. Normally, the annex object would be in
there, but there are of course circumstances where the inode cache is out
of sync with reality, since it's only a cache.

Fixed by checking if the object is unmodified; if so, we don't need to
try to repopulate it. This does add an additional checksum to the unlock
path, but it's already checksumming the worktree file in another case,
so it doesn't slow it down overall.

Further investigation found that a similar problem occurred when smudge --clean
is called on a file and the inode cache is not populated. cleanOldKeys
deleted the unmodified old object file in this case. This was also
fixed by checking if the object is unmodified.

In general, use of getInodeCaches and sameInodeCache is potentially
dangerous if the inode cache has not gotten populated for some reason.
It is better to use isUnmodified. I briefly audited other places that check
the inode cache, and did not see any immediate problems, but it would be
easy to miss this kind of problem.

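To make the shape of that fix concrete, here is a hedged standalone sketch
(the names are illustrative, not git-annex's actual isUnmodified or its key
types; the SHA package stands in for the real checksumming):

    -- Illustrative sketch only, not git-annex source.
    import qualified Data.ByteString.Lazy as L
    import Data.Digest.Pure.SHA (sha256, showDigest)

    -- Compare the object file's actual checksum with the one expected
    -- for its key, rather than trusting the inode cache.
    isUnmodifiedByChecksum :: FilePath -> String -> IO Bool
    isUnmodifiedByChecksum obj expected = do
        content <- L.readFile obj
        return (showDigest (sha256 content) == expected)

    -- Only repopulate the object when its content really differs.
    repopulateIfModified :: FilePath -> String -> IO () -> IO ()
    repopulateIfModified obj expected repopulate = do
        unmodified <- isUnmodifiedByChecksum obj expected
        if unmodified
            then return ()   -- content is good; keep it
            else repopulate

The extra checksum is the cost mentioned above; the payoff is that a stale
or unpopulated inode cache can no longer cause good content to be discarded.
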
This commit was sponsored by Denis Dzyubenko on Patreon.

I've eyeballed all --json commands, and the only difference should be
that some fields are re-ordered.

Reversion introduced by v6 mode support, affects v5 too.
Also fix a similar crash when the webapp is used to delete a repository.

http://git-annex.branchable.com/bugs/Assistant_keeps_deleting_all_the_files_in_my_repo/

git annex adjust --force will overwrite any current adjusted branch.
I didn't document this, because for the user, deleting the branch is just
as good.

When git-annex is used with a git version older than 2.2.0, disable support for
adjusted branches, since GIT_COMMON_DIR is needed to update them and was first
added in that version of git.

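A hedged sketch of such a version gate (the names are illustrative, not
git-annex's actual git version handling):

    -- Adjusted branches need GIT_COMMON_DIR, first added in git 2.2.0.
    import Data.List (stripPrefix)
    import Data.Version (Version, makeVersion, parseVersion)
    import System.Process (readProcess)
    import Text.ParserCombinators.ReadP (readP_to_S)

    -- Parse the number out of "git version 2.2.0" style output.
    gitVersion :: IO (Maybe Version)
    gitVersion = do
        out <- readProcess "git" ["version"] ""
        return $ do
            rest <- stripPrefix "git version " out
            let numeric = takeWhile (`elem` "0123456789.") rest
            case readP_to_S parseVersion numeric of
                [] -> Nothing
                ps -> Just (fst (last ps))

    -- Disable adjusted branch support when git is too old.
    supportsAdjustedBranches :: IO Bool
    supportsAdjustedBranches =
        maybe False (>= makeVersion [2, 2, 0]) <$> gitVersion
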
HOME

This is not really extensive enough, but it's a start.

The crash turned out to be caused by the sqlite database being deleted out
from under sqlite before it was done with it. Since multiple git_annex
calls are done in the same process while running the test suite, the
DbHandle could linger until it was GCed, and the test repo, and thus the
sqlite database, could be deleted before the workerThread was done.

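One way to avoid that class of problem, sketched here with the sqlite-simple
package rather than git-annex's persistent-based DbHandle (so everything
below is a stand-in), is to close the handle deterministically instead of
relying on GC finalization:

    {-# LANGUAGE OverloadedStrings #-}
    import Control.Exception (bracket)
    import Database.SQLite.Simple (close, execute_, open)
    import System.Directory (removeFile)

    main :: IO ()
    main = do
        -- bracket guarantees the handle is closed even on exception,
        -- so the database file is never deleted out from under sqlite.
        bracket (open "test.db") close $ \conn ->
            execute_ conn "CREATE TABLE IF NOT EXISTS t (x INTEGER)"
        -- Delete only after the handle is known to be closed.
        removeFile "test.db"
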
Ugly escaping, as the unicode method doesn't work on systems that don't
support unicode.

When an annexed file is deleted (but not git rmed), git still has the add
staged, so the content should not be unused, and was wrongly treated as such.
So, we need to look both at the file on disk, to see if it's an annex link,
and at the file in the index too. lookupFile doesn't look in the index if
the file is not present on disk.

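A hedged sketch of that two-place check (the helpers are illustrative, not
the real lookupFile, which also parses the annex link itself):

    import System.Directory (doesFileExist)
    import System.Exit (ExitCode (..))
    import System.Process (readProcessWithExitCode)

    -- Is an entry for this path still staged in the index?
    stagedInIndex :: FilePath -> IO Bool
    stagedInIndex f = do
        (code, out, _) <- readProcessWithExitCode
            "git" ["ls-files", "--cached", "--", f] ""
        return (code == ExitSuccess && not (null out))

    -- Check the work tree first, then fall back to the index, so a
    -- staged-but-deleted file is still seen.
    knownToGit :: FilePath -> IO Bool
    knownToGit f = do
        onDisk <- doesFileExist f
        if onDisk
            then return True
            else stagedInIndex f
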
The fix is not relevant for unlocked files.

Have to change the content of the unlocked file before committing;
otherwise git commit will fail in v6 mode when the file was already
unlocked, because no changes have been made.

File is locked here, so use the right test.

In v5, that was not possible, but it is in v6, and so the test was failing.
Investigating, it turned out that locking was copying the pointer file
content to the annex object despite the content not being present. So,
add a check to prevent that.

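The check amounts to detecting a pointer file before copying. A sketch,
assuming the usual git-annex pointer format in which the file begins with
an /annex/objects/ path (the function names are illustrative):

    import qualified Data.ByteString as B
    import qualified Data.ByteString.Char8 as B8
    import System.IO (IOMode (ReadMode), withBinaryFile)

    -- Reading only a short prefix avoids slurping large real content
    -- into memory just to classify the file.
    isPointerFile :: FilePath -> IO Bool
    isPointerFile f = withBinaryFile f ReadMode $ \h -> do
        start <- B.hGet h 32
        return (B8.pack "/annex/objects/" `B8.isPrefixOf` start)

    -- Only populate the annex object from the file when it is not a
    -- bare pointer; otherwise the object would end up containing the
    -- pointer text instead of the content.
    populateObject :: FilePath -> (FilePath -> IO ()) -> IO ()
    populateObject f moveToObject = do
        pointer <- isPointerFile f
        if pointer
            then return ()   -- content not present; nothing to copy
            else moveToObject f
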
Many failures.

Set annex.largefiles when adding the conflicting non-annexed file;
otherwise it would be added as an annexed file.

WorkTree.lookupFile was finding a key for a file that's deleted from the
work tree, which is different from the v5 behavior (though perhaps the same
as the direct mode behavior). Fix by checking that the work tree file exists
before catting its key.

Hopefully this won't slow things down much; the catKey is probably much
more expensive. I can't see any way to optimise this, except perhaps to
make Command.Unused check if work tree files exist before/after calling
lookupFile. But it seems better to make lookupFile really only find keys
for work tree files; that's what it's intended to do.

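As a standalone sketch (catKey is abstracted away here; the real lookupFile
also deals with annex symlinks and pointer files):

    import System.Directory (doesPathExist)

    -- The existence check is cheap, so doing it before the expensive
    -- lookup costs little and stops keys being found for deleted files.
    lookupFile :: FilePath -> (FilePath -> IO (Maybe String)) -> IO (Maybe String)
    lookupFile file catKey = do
        present <- doesPathExist file
        if present
            then catKey file      -- relatively expensive git lookup
            else return Nothing   -- deleted from work tree: no key
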
In some cases, git was running the ingitfile through the clean filter.

Need to run that check inside an annex repo.