| Commit message | Author | Age |
| |
Still lots to do to make sync handle direct mode, but this is a good first
step.
|
| |
However, I don't yet have a reliable way to deal with files being modified
while they're being transferred. I have code that detects it on the sending
side, but the receiver is still free to move the wrong content into its
annex, and record that it has the content. So that's not acceptable, and
I'll need to work on it some more.
Still, at this point I can use a direct mode repository as a remote and
transfer files from and to it.
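
As a rough illustration of the sending-side check described above
(illustrative names only, not git-annex's actual API), stat the file
before the transfer, stat it again afterwards, and treat any change in
size or mtime as a failed transfer:

    import System.Posix.Files (getFileStatus, fileSize, modificationTime)

    -- Run the transfer action, and only count it as a success if the
    -- file's size and mtime are unchanged afterwards.
    sendUnlessModified :: FilePath -> IO Bool -> IO Bool
    sendUnlessModified f transfer = do
        before <- stat f
        ok <- transfer
        after <- stat f
        return (ok && before == after)
      where
        stat f' = do
            s <- getFileStatus f'
            return (fileSize s, modificationTime s)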
|
| |
Also for dropping objects in direct mode.
Checking presence reliably needs a cache of mtime, size, and inode.
This way, if a file is modified, keys that point to it are no longer
present.
Also, the code for restoring the symlink when removing objects is
unnecessarily messy. calcGitLink was generating links starting with
"../../remote/.git/", when running "git annex move --from remote".
I put in a workaround, but calcGitLink should probably be fixed.
There is not yet support for getting objects from repositories in direct
mode; it still looks for content in .git/annex/objects, and there's no
one place I can change to fix that.
Also, getting objects from direct mode repositories is problematic, since
they can be changed while the object is being transferred. The content
probably needs to be quarantined first.
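
A minimal sketch of the kind of mtime/size/inode cache mentioned above
(the names and types here are illustrative, not the real git-annex
ones): content only counts as present for a key while the file still
matches the cached stat information.

    import System.Posix.Files (getFileStatus, fileID, fileSize, modificationTime)
    import System.Posix.Types (EpochTime, FileID, FileOffset)

    data InodeCache = InodeCache FileID FileOffset EpochTime
        deriving (Eq, Show)

    genInodeCache :: FilePath -> IO InodeCache
    genInodeCache f = do
        s <- getFileStatus f
        return (InodeCache (fileID s) (fileSize s) (modificationTime s))

    -- A direct mode file only counts as the content of its key while it
    -- still matches the cache recorded when it was last verified.
    stillPresent :: FilePath -> InodeCache -> IO Bool
    stillPresent f cached = do
        cur <- genInodeCache f
        return (cur == cached)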
|
| |
* get/copy --auto: Transfer data even if it would exceed numcopies,
when preferred content settings want it.
* drop --auto: Fix dropping content when there are no preferred content
settings.
|
| |
This includes keeping track of which buddies we're pairing with, to know
which PairAcks are legitimate.
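
As an illustration of the idea (hypothetical names, not the assistant's
actual pairing types): remember which buddies we initiated pairing
with, and only honor a PairAck that matches one of them.

    import Control.Concurrent.STM
    import qualified Data.Set as S

    type BuddyID = String

    -- Pairing requests we have initiated and not yet completed.
    type PairingInProgress = TVar (S.Set BuddyID)

    startPairing :: PairingInProgress -> BuddyID -> IO ()
    startPairing v buddy = atomically (modifyTVar' v (S.insert buddy))

    -- Only a PairAck from a buddy we are actually pairing with is
    -- treated as legitimate; anything else is ignored.
    legitimatePairAck :: PairingInProgress -> BuddyID -> IO Bool
    legitimatePairAck v buddy = S.member buddy `fmap` readTVarIO v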
|
| |
a key's presence.
Also, use the new withQuietOutput function to avoid running a shell
just to redirect stderr to /dev/null in two other places.
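
The underlying technique, sketched with System.Process (an
approximation, not the actual withQuietOutput): open /dev/null and pass
it as the child's stderr, instead of spawning a shell to do the
redirection.

    import System.Exit (ExitCode)
    import System.IO (IOMode(WriteMode), hClose, openFile)
    import System.Process

    -- Run a command with its stderr discarded, without involving a shell.
    quietly :: String -> [String] -> IO ExitCode
    quietly cmd args = do
        nullh <- openFile "/dev/null" WriteMode
        (_, _, _, pid) <- createProcess (proc cmd args)
            { std_err = UseHandle nullh }
        r <- waitForProcess pid
        hClose nullh
        return r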
|
| |
This cache prevented noticing changes made by another process.
The case I just ran into involved the assistant dropping a file, which
cached its presence info. Then the same file was downloaded again,
but the assistant didn't know its presence info had changed.
I don't see a way to keep this cache. Will instead rely on the OS level
file cache, for files in the journal. May need to add more higher-level
caching of info that it's ok to have a potentially stale copy of,
although much of git-annex already does so.
|
| |
When in a subdir, both the normal filepath, and the filepath relative to
the top of the git repo are needed for matching. The former for key lookup,
and the latter for include/exclude to match against. Previously, key lookup
didn't work in this situation.
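
Roughly, the two paths involved, in a simplified sketch (the real code
goes through git-annex's repo handling; this assumes the current
directory is inside the repository):

    import System.Directory (getCurrentDirectory)
    import System.FilePath ((</>), makeRelative)

    -- Given the top of the repository and a file named relative to the
    -- current subdirectory, produce both the path as given (used for
    -- key lookup) and the path relative to the top of the repository
    -- (what include= and exclude= are matched against).
    pathsForMatching :: FilePath -> FilePath -> IO (FilePath, FilePath)
    pathsForMatching repotop file = do
        cwd <- getCurrentDirectory
        return (file, makeRelative repotop (cwd </> file))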
|
| |
The standalone build does not bundle its own ssh, so should be built
to support as wide an array of ssh versions as possible, so turn off
connection caching.
Unfortunately, as implemented this forces a full rebuild when building the
standalone binary, and of course it makes it somewhat slower.
This is not ideal, but neither is probing the ssh version every time it's
run (slow), or once when initializing a repo (fragile).
|
| |
Setting GIT_INDEX_FILE clobbers the rest of the environment, making git
not read ~/.gitconfig, and blow up if GECOS didn't have a name for the
user.
I'm not entirely happy with getEnvironment being run every time now,
that's somewhat expensive. It may make sense to just set GIT_COMMITTER_*
and GIT_AUTHOR_*, but I worry that clobbering the rest could break PATH,
or GIT_PATH, or something else that might be used by a command run in here.
And caching the environment is not a good idea either; it can change..
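
The fix amounts to copying the current environment and overriding just
the one variable, along these lines (a sketch using System.Process, not
the actual Git.* code):

    import System.Environment (getEnvironment)
    import System.Exit (ExitCode)
    import System.Process

    -- Run git with GIT_INDEX_FILE pointing at an alternate index, while
    -- keeping the rest of the environment (HOME, PATH, GIT_*, ...) intact.
    gitWithIndex :: FilePath -> [String] -> IO ExitCode
    gitWithIndex indexfile args = do
        environ <- getEnvironment
        let environ' = ("GIT_INDEX_FILE", indexfile)
                : filter ((/= "GIT_INDEX_FILE") . fst) environ
        (_, _, _, pid) <- createProcess (proc "git" args)
            { env = Just environ' }
        waitForProcess pid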
|
| |
I'm using transfer for most things, both removable drives and cloud
storage, because it's the safest choice. We'll see if it makes sense
to prompt for the group when setting this up, or let the user pick
something else after the fact.
|
| |
Both when queueing downloads, and uploads, consults the preferred content
settings.
I didn't make it check yet when requeuing failed transfers or queuing
deferred downloads; dealing with the preferred content settings (or indeed,
other settings) changing while the assistant is running still needs work.
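
Conceptually, the check made before queueing looks something like this
(hypothetical helper names; Key and AssociatedFile are simplified
here):

    import Control.Monad (when)

    type Key = String
    type AssociatedFile = Maybe FilePath

    -- Only queue a transfer when the preferred content settings say the
    -- repository should have the content. The wanted check stands in
    -- for git-annex's real preferred content evaluation.
    queueTransferWhenWanted
        :: (AssociatedFile -> IO Bool)       -- wantGet or wantSend
        -> (Key -> AssociatedFile -> IO ())  -- add to the transfer queue
        -> Key -> AssociatedFile -> IO ()
    queueTransferWhenWanted wanted enqueue key file = do
        ok <- wanted file
        when ok (enqueue key file)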
|
| |
this should make it easy to use from inside the assistant, where
everything is an AssociatedFile.
|
| |
Made Git.LsFiles return cleanup actions, and everything waits on
processes now, except of course for Seek.
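
The shape of the change, roughly (simplified relative to the real
Git.LsFiles): along with the lazily-read output, return a cleanup
action that waits on the process and reports whether it succeeded.

    import System.Exit (ExitCode(..))
    import System.IO (hGetContents)
    import System.Process

    -- Stream the output of "git ls-files", and hand back a cleanup
    -- action for the caller to run once it has consumed the output.
    lsFiles :: [String] -> IO ([FilePath], IO Bool)
    lsFiles params = do
        (_, Just h, _, pid) <- createProcess
            (proc "git" ("ls-files" : params)) { std_out = CreatePipe }
        output <- hGetContents h
        let cleanup = fmap (== ExitSuccess) (waitForProcess pid)
        return (lines output, cleanup)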
|
| |
I'm down to 9 places in the code that can produce unwaited for zombies.
Most of these are pretty innocuous, at least for now, and are only
used in short-running commands, or commands that run a set of
actions and explicitly reap zombies after each one.
The one from Annex.Branch.files could be trouble later,
since both Command.Fsck and Command.Unused can trigger it,
and the assistant will be doing those eventually. Ditto the one in
Git.LsTree.lsTree, which Command.Unused uses.
The only ones currently affecting the assistant though, are
in Git.LsFiles. Several threads use several of those.
(And yeah, using pipes or ResourceT would be a less ad-hoc approach,
but I don't really feel like ripping my entire code base apart right
now to change a foundation monad. Maybe one of these days..)
|
| |
Fix resuming of downloads, which do not have a transfer info file to read.
When checking upload progress, use the MVar, rather than re-reading
the info file.
Catch exceptions in the transfer action. Required a tryAnnex.
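
tryAnnex is essentially a lifted exception catcher; a minimal sketch of
the same idea in plain IO (the real one runs in the Annex monad):

    import Control.Exception (SomeException, try)

    -- Run a transfer action, converting any exception it throws into a
    -- failed transfer rather than letting it kill the thread.
    runTransferAction :: IO Bool -> IO Bool
    runTransferAction a = do
        r <- try a :: IO (Either SomeException Bool)
        case r of
            Left _ -> return False
            Right ok -> return ok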
|
| |
Branch.get is not able to see changes that have been staged to the index
but not committed. This is a limitation of git cat-file --batch; when
reading from the index, as opposed to from a branch, it does not notice
changes made after the first time it reads the index.
So, had to revert the changes made in 1f73db3469e29448bcb1520893de11b23da6fb1f
to make annex.alwayscommit=false stage changes.
Also, ensure that Branch.change and Branch.get always see changes
at all points during a commit, by not deleting journal files when
staging to the index. Delete them only after committing the branch.
Before, there was a race during commits where a different git-annex
could see out-of-date info from the branch while a commit was in progress.
That's also done when updating the branch to merge in remote branches.
In the case where the local git-annex branch has had changes pushed into it
that are not yet reflected in the index, and there are journalled changes
as well, a merge commit has to be done.
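
In outline, the ordering described above (a pseudocode-level sketch,
parameterized over the actual journal and branch operations):

    -- Journalled changes are staged and committed first, and the
    -- journal files are deleted only after the commit, so there is no
    -- window in which a change is visible in neither the journal nor
    -- the branch.
    commitBranch
        :: IO [FilePath]          -- list the journal files
        -> ([FilePath] -> IO ())  -- stage them into the index
        -> IO ()                  -- commit the index to the git-annex branch
        -> (FilePath -> IO ())    -- delete one journal file
        -> IO ()
    commitBranch listJournal stage commit remove = do
        js <- listJournal
        stage js
        commit
        mapM_ remove js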
|
| |
Commits used to be made to the git-annex branch whenever there were
journalled changes from a previous command, and the current command looked
up the value of a file. This no longer happens.
This means that transferkey, which is a oneshot command that stages
changes, can be run multiple times by the assistant, without each of them
committing the changes made by the command before. This will be a lot
faster and use less space, by batching up the commits.
Commits still happen if a remote git-annex branch has been changed and is
merged in.
|
| |
This interface is not a good idea, because a running git cat-file --batch
does not notice when existing files in the index are changed.
|
| |
too long (and use relative path to minimise path to the control socket).
|
| |
Found a very cheap way to determine when a disconnected remote has
diverged, and has new content that needs to be transferred: Piggyback on
the git-annex branch update, which already checks for divergence.
However, this does not check if new content has appeared locally while
disconnected, that should be transferred to the remote.
Also, this does not handle cases where the two git repos are in sync,
but their content syncing has not caught up yet.
This code could have its efficiency improved:
* When multiple remotes are synced, if any one has diverged, they're
all queued for transfer scans.
* The transfer scanner could be told whether the remote has new content,
the local repo has new content, or both, and could optimise its scan
accordingly.
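
A sketch of the piggybacking (simplified names; the real check is part
of the assistant's branch syncing): compare the git-annex branch tip
before and after merging a remote's branch, and queue a transfer scan
if it moved.

    import Control.Monad (when)

    type Sha = String
    type Remote = String

    -- When updating the git-annex branch from a remote, notice whether
    -- the merge changed the branch tip; if it did, the remote diverged
    -- while disconnected, so queue a transfer scan of it.
    scanIfDiverged
        :: IO Sha             -- read the current git-annex branch tip
        -> (Remote -> IO ())  -- merge the remote's git-annex branch
        -> (Remote -> IO ())  -- queue a transfer scan of the remote
        -> Remote -> IO ()
    scanIfDiverged branchTip merge queueScan remote = do
        before <- branchTip
        merge remote
        after <- branchTip
        when (before /= after) (queueScan remote)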
|
| |
Make Utility.Process wrap the parts of System.Process that I use,
and add debug logging to them.
Also wrote some higher-level code that allows running an action
with handles to a process's stdin or stdout (or both), and checking
its exit status, all in a single function call.
As a bonus, the debug logging now indicates whether the process
is being run to read from it, feed it data, chat with it (writing and
reading), or just call it for its side effect.
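
The higher-level helpers are in the spirit of this sketch (simplified
relative to Utility.Process, and without the debug logging): run a
command, hand the caller handles to its stdin and stdout, then wait for
it and check the exit status in one call.

    import System.Exit (ExitCode(..))
    import System.IO (Handle, hClose)
    import System.Process

    -- Run a command, let the caller chat with it over its stdin and
    -- stdout, then wait for it and report whether it succeeded.
    withIOHandles :: String -> [String] -> (Handle -> Handle -> IO a) -> IO (a, Bool)
    withIOHandles cmd args chat = do
        (Just hin, Just hout, _, pid) <- createProcess (proc cmd args)
            { std_in = CreatePipe
            , std_out = CreatePipe
            }
        r <- chat hin hout
        hClose hin
        code <- waitForProcess pid
        return (r, code == ExitSuccess)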
|
| |
Test suite now passes with -threaded!
I traced back all the hangs with -threaded to System.Cmd.Utils. It seems
it's just crappy/unsafe/outdated, and should not be used. System.Process
seems to be the cool new thing, so converted all the code to use it
instead.
In the process, --debug stopped printing commands it runs. I may try to
bring that back later.
Note that even SafeSystem was switched to use System.Process. Since that
was a modified version of code from System.Cmd.Utils, it needed to be
converted too. I also got rid of nearly all calls to forkProcess,
and all calls to executeFile, which I'm also doubtful about working
well with -threaded.
|
| |
Its truncation behavior is a red flag, so avoid using it in these places
where only raw ByteStrings are used, without looking at the data inside.
|
| |
Crazy like a fox..
|
| |
Increase queue size for update-index actions, because otherwise they'll
never be flushed.
|
| |
(oops, I had a calcMerge and a calc_merge!)
|
| |
May not work. Certainly needs to flush the queue from time to time
when only symlink changes are being made.
|