| Commit message | Author | Age |
| |
Avoid ever using read to parse a non-Haskell-formatted input string.
show :: Key is arguably still show abuse, but displaying Keys as filenames
is just too useful to give up.
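For illustration, a minimal Haskell sketch of the idea; the Key fields and the
"BACKEND--NAME" filename form here are assumptions, not git-annex's actual
representation. An explicit serializer/parser pair replaces read, while Show
is kept (the show abuse mentioned above) so a Key still displays as a filename.

    -- Hypothetical Key type; the real one carries more fields.
    data Key = Key
        { keyBackend :: String
        , keyName    :: String
        } deriving (Eq)

    -- Deliberate show abuse: a Key displays as its filename form.
    instance Show Key where
        show = key2file

    -- Serialize to the assumed "BACKEND--NAME" filename form.
    key2file :: Key -> FilePath
    key2file (Key b n) = b ++ "--" ++ n

    -- Parse that form back, failing softly instead of erroring the
    -- way read does on malformed input.
    file2key :: FilePath -> Maybe Key
    file2key s = case span (/= '-') s of
        (b, '-':'-':n) | not (null b) && not (null n) -> Just (Key b n)
        _ -> Nothing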
| |
Should have done this a long time ago.
| |
The last commit added some git-log calls to a merge. This removes some of
them by only merging branches that have unique refs.
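As a sketch of the filtering step (the names here are hypothetical): given
(branch, ref) pairs for the remotes, keep only the first branch for each
distinct ref, so duplicate refs are neither compared with git-log nor merged
again.

    import Data.Function (on)
    import Data.List (nubBy)

    -- Keep one branch per distinct ref; duplicates add no new commits.
    uniqueRefBranches :: [(String, String)] -> [(String, String)]
    uniqueRefBranches = nubBy ((==) `on` snd)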
| |
Thanks Valentin Haenel for a test case showing how non-fast-forward merges
could result in an ongoing pull/merge/push cycle.
While the git-annex branch is fast-forwarded, git-annex's index file is still
updated using the union merge strategy as before. There's no other way to
update the index that would be any faster.
It is possible that a union merge and a fast-forward result in different file
contents: Files should have the same lines, but a union merge may change
their order. If this happens, the next commit made to the git-annex branch
will have some unnecessary changes to line orders, but the consistency
of data should be preserved.
Note that when the journal contains changes, a fast-forward is never attempted,
which is fine, because committing those changes would be vanishingly unlikely
to leave the git-annex branch at a commit that already exists in one of
the remotes.
The real difficulty is handling the case where multiple remotes have all
changed. git-annex does find the best (i.e., newest) one and fast-forwards
to it. If the remotes have diverged, no fast-forward is done at all. It would
be possible to pick one, fast-forward to it, and make a merge commit to
the rest, but I see no benefit to adding that complexity.
Determining the best of N changed remotes requires N*2+1 calls to git-log, but
these are fast git-log calls, and N is typically small. Also, typically
some or all of the remote refs will be the same, and git-log is not called to
compare those. In the real world I expect this will almost always add only
1 git-log call to the merge process. (Which already makes N anyway.)
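One plausible shape for the comparison, sketched below; it is not necessarily
the exact git-log invocation git-annex uses. A candidate ref is strictly newer
than the current best when best..candidate is non-empty and candidate..best is
empty; if both ranges are non-empty, the refs have diverged and no fast-forward
is possible.

    import System.Process (readProcess)

    -- True if the range from..to contains at least one commit.
    rangeNonEmpty :: String -> String -> IO Bool
    rangeNonEmpty from to = do
        out <- readProcess "git" ["log", "--oneline", "-n1", from ++ ".." ++ to] ""
        return (not (null out))

    -- Fold over candidate refs, keeping the newest seen so far.
    -- Nothing means the refs have diverged, so no fast-forward is done.
    newestRef :: String -> [String] -> IO (Maybe String)
    newestRef best [] = return (Just best)
    newestRef best (r:rs) = do
        ahead <- rangeNonEmpty best r            -- any commits in best..r ?
        if not ahead
            then newestRef best rs               -- r adds nothing new
            else do
                behind <- rangeNonEmpty r best   -- any commits in r..best ?
                if behind
                    then return Nothing          -- diverged
                    else newestRef r rs          -- r is strictly newer

At most two git-log calls are made per candidate that actually differs from
the best ref so far, in line with the N*2 part of the figure above.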
| |
Fixes git annex init in a bare repository that already has a git-annex
branch.
| |
Specifically, disabled trying to update the git-annex branch on the remote,
since that data is never used by operations that act on such remotes.
Also, when copying content to such a remote, skip committing the presence
information changes to its git-annex branch. Leaving it in the journal there
is ok: Any command run on the remote that needs the info will flush the
journal.
This may partially solve this bug:
http://git-annex.branchable.com/bugs/fails_to_handle_lot_of_files/
Although I still see unreaped git processes piling up when doing a copy --to.
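A hedged sketch of that guard; the RepoKind type and function names are made
up for illustration. Presence changes are always journalled, but the journal
is only committed to the git-annex branch when acting on the local repository;
on such a remote the change stays in its journal until a command run there
flushes it.

    -- Hypothetical: whether we are operating locally or on a remote repo.
    data RepoKind = Local | Remote

    -- The journalling and committing actions are passed in as parameters
    -- to keep the sketch self-contained.
    recordPresence :: RepoKind -> IO () -> IO () -> IO ()
    recordPresence kind writeJournal commitJournal = do
        writeJournal                 -- always record the change in the journal
        case kind of
            Local  -> commitJournal  -- commit to the git-annex branch as usual
            Remote -> return ()      -- leave it journalled for a later flush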
| |
no code changes
| |
Another process may stage journalled files before the lock is taken, so the
list of journalled files has to be gathered after taking it.
It's unfortunate this means getting the directory contents twice,
but it seems better to do that than sometimes take the lock
unnecessarily.
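A sketch of that ordering, with placeholder paths and a plain file handle
standing in for the real lock: peek at the journal to decide whether locking
is needed at all, then re-read it once the lock is held.

    import Control.Exception (bracket)
    import System.Directory (getDirectoryContents)
    import System.IO (IOMode(WriteMode), hClose, openFile)

    stageJournal :: FilePath -> FilePath -> ([FilePath] -> IO ()) -> IO ()
    stageJournal journalDir lockFile stage = do
        peek <- journalFiles
        if null peek
            then return ()  -- nothing journalled; avoid taking the lock
            else bracket (openFile lockFile WriteMode) hClose $ \_ -> do
                fs <- journalFiles  -- re-read: the list may have changed
                stage fs
      where
        journalFiles = fmap (filter (`notElem` [".", ".."]))
            (getDirectoryContents journalDir)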
| |
And a theoretical fix to branchstate cache invalidation, but not a bug
that could actually happen.
| |
It was checking whether it needed to merge on every branch access; fix it to
only check once.
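A minimal sketch of checking only once, using a plain IORef Bool flag rather
than git-annex's actual branch state:

    import Data.IORef (IORef, readIORef, writeIORef)

    -- Run the merge check the first time the branch is accessed, and
    -- record that it has been done so later accesses skip it.
    checkMergeOnce :: IORef Bool -> IO () -> IO ()
    checkMergeOnce checkedRef check = do
        done <- readIORef checkedRef
        if done
            then return ()
            else do
                check
                writeIORef checkedRef True

A caller would create the flag once per process with newIORef False and pass
it to every branch access.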
| |
avoids git warning "error: duplicate parent xxx ignored"
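A sketch of the fix, assuming the merge commit is created with git commit-tree
(the helper below is illustrative, not the real code): deduplicate the parent
refs before passing them, so git never sees the same -p twice.

    import Data.List (nub)
    import System.Process (readProcess)

    -- Returns the new commit's sha (with a trailing newline).
    commitTree :: String -> String -> [String] -> IO String
    commitTree tree message parents = readProcess "git"
        (["commit-tree"] ++ concatMap parent (nub parents) ++ ["-m", message, tree])
        ""
      where
        parent p = ["-p", p]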
| |
only write index once