| Commit message | Author | Age |
| |
Currently only the web special remote is readonly, but it'd be possible to
also have readonly drives, or other remotes. These are handled in the
assistant by only downloading from them, and never trying to upload to
them.
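
A minimal Haskell sketch of the idea (not the actual git-annex code; the
Remote type and its readonly flag here are hypothetical stand-ins): a
read-only remote is only ever used as a download source, never as an
upload target.

    -- Hypothetical stand-in types; git-annex's real Remote type differs.
    data Direction = Upload | Download
      deriving (Eq, Show)

    data Remote = Remote
      { remoteName     :: String
      , remoteReadonly :: Bool   -- True for e.g. the web special remote
      }

    -- Drop the upload direction for read-only remotes before queueing
    -- any transfers involving them.
    wantedDirections :: Remote -> [Direction]
    wantedDirections r
      | remoteReadonly r = [Download]
      | otherwise        = [Download, Upload]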
| |
But do exclude them when pushing out changes.
| |
The expensive transfer scan now scans a whole set of remotes in one pass.
So at startup, or when the network comes up, it will run only once.

Note that this can result in transfers from/to higher cost remotes being
queued before transfers of other content from/to lower cost remotes.
Before, low cost remotes were scanned first and all their transfers came
first. When multiple transfers are queued for a key, the lower cost ones
are still queued first. However, this could result in transfers from slow
remotes running for a long time while transfers of other data from faster
remotes wait.

I expect to make the transfer queue smarter about ordering
and/or make it allow multiple transfers at a time, which should eliminate
this annoyance. (Also, it was already possible to get into that situation;
for example, if the network was up, lots of transfers from slow remotes
might be queued, and then a disk is mounted and its faster transfers have
to wait.)

Also note that this means I don't need to improve the code in
Assistant.Sync that currently checks if any of the reconnected remotes
have diverged, and if so, queues scans of all of them. That had been very
inefficient, but now doesn't matter.
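
A rough sketch of the queueing order described above, using made-up types
(the real scanner is more involved): one pass covers every remote, and for
any given key the transfers are queued cheapest remote first.

    import Data.List (sortOn)

    type Key = String
    data Remote = Remote { remoteName :: String, remoteCost :: Int }
    data Transfer = Transfer { transferRemote :: Remote, transferKey :: Key }

    -- One scan pass over all remotes at once; per key, lower cost remotes
    -- are queued before higher cost ones, but a slow remote's transfer can
    -- still land ahead of a faster remote's transfer for a different key.
    scanAll :: [Remote] -> [Key] -> [Transfer]
    scanAll remotes keys =
      [ Transfer r k | k <- keys, r <- sortOn remoteCost remotes ]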
| |
Used by the assistant rather than copy, this is faster because it avoids
using git ls-files, avoids checking the location log redundantly, and
runs in oneshot mode, avoiding making a commit to the git-annex branch
for every file transferred.
| |
Since it turned out to make sense to always scan all remotes on startup,
there's no need to persist the info about which have been scanned.
| |
low cost ==> high priority
| |
Or when a remote first becomes available after startup.
| |
There are multiple reasons to do this:
* The local network may be up solid, but a route to a networked remote
  is having trouble. Any transfers to it that fail should be retried.
* Someone might have wicd running, but also like to bring up new networks
  by hand. This way, the assistant will eventually notice them (see the
  sketch below).
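
A sketch of such a periodic retry thread, with a hypothetical
retryFailedTransfers action and an arbitrary interval (not the assistant's
actual code):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (forever)

    -- Wake up periodically and requeue failed transfers, so flaky routes
    -- and hand-configured networks are eventually noticed even without a
    -- dbus notification.
    startRetryThread :: IO () -> IO ()
    startRetryThread retryFailedTransfers = do
      _ <- forkIO $ forever $ do
        threadDelay (30 * 60 * 1000000)  -- half an hour, in microseconds
        retryFailedTransfers
      return ()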
| |
The problem with using it here is that, if a removable drive is scanned
and gets disconnected during the scan, testing for all the files will
indicate it doesn't have them, and the scan is logged as completed
successfully, without necessary transfers being queued.
| |
of a remote
| |
This way, we get transfers from cheapest remotes.
| |
Found a very cheap way to determine when a disconnected remote has
diverged, and has new content that needs to be transferred: piggyback on
the git-annex branch update, which already checks for divergence
(sketched below).

However, this does not check whether new content that should be
transferred to the remote has appeared locally while disconnected.
Also, this does not handle cases where the two git repos are in sync,
but their content syncing has not caught up yet.

This code could have its efficiency improved:
* When multiple remotes are synced, if any one has diverged, they're
  all queued for transfer scans.
* The transfer scanner could be told whether the remote has new content,
  the local repo has new content, or both, and could optimise its scan
  accordingly.
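
A toy sketch of the piggybacking idea, with invented names (none of these
are the real git-annex functions): the branch update already reports
whether a remote's git-annex branch diverged, and that cheap result
decides whether an expensive transfer scan gets queued.

    import Control.Monad (when)

    -- Invented types for illustration only.
    newtype Remote = Remote { remoteName :: String }

    -- Pretend the git-annex branch update tells us whether the remote's
    -- branch had changes we did not already know about.
    updateBranchFrom :: Remote -> IO Bool
    updateBranchFrom _ = return False  -- placeholder for the real check

    queueTransferScan :: Remote -> IO ()
    queueTransferScan r = putStrLn ("transfer scan queued for " ++ remoteName r)

    -- Only pay for an expensive transfer scan when the cheap divergence
    -- check says the remote may have new content.
    reconnected :: Remote -> IO ()
    reconnected r = do
      diverged <- updateBranchFrom r
      when diverged (queueTransferScan r)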
| |
This deals with interruptions in network connectivity, by listening
for a new network interface coming up (using dbus to see when
network-manager or wicd do it), and forcing a rescan of
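
Roughly, using the Haskell dbus package, something along these lines
(a sketch only: the DBus.Client API details are from memory, and matching
NetworkManager's StateChanged signal is just one possibility, not
necessarily what the assistant does):

    {-# LANGUAGE OverloadedStrings #-}
    import DBus.Client

    -- Run the given action whenever NetworkManager reports a state
    -- change, e.g. to requeue scans of network remotes.
    watchNetwork :: IO () -> IO ()
    watchNetwork onChange = do
      client <- connectSystem
      _ <- addMatch client
             (matchAny { matchInterface = Just "org.freedesktop.NetworkManager"
                       , matchMember    = Just "StateChanged"
                       })
             (\_signal -> onChange)
      return ()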
| |
The resumed transfer still uses a slot, so will delay other queued
transfers from starting.
| |
Currently waits for a new transfer slot to open up, which probably needs to
change.
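
The slot bookkeeping can be pictured as a counting semaphore; a simplified
sketch (not the assistant's real transfer slot code), where every transfer,
including a resumed one, has to hold a slot while it runs:

    import Control.Concurrent.QSem
    import Control.Exception (bracket_)

    -- A fixed pool of transfer slots.
    newTransferSlots :: Int -> IO QSem
    newTransferSlots = newQSem

    -- Block until a slot is free, run the transfer, then release the slot.
    inTransferSlot :: QSem -> IO a -> IO a
    inTransferSlot slots transfer =
      bracket_ (waitQSem slots) (signalQSem slots) transfer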
| |
A paused transfer's thread keeps running, keeping the slot in use.
This is intentional; pausing a transfer should not let other
queued transfers run in its place.
| |
This seems to work pretty well.

Handled the process groups like this:
- git-annex processes started by the assistant for transfers are run in their
  own process groups.
- otherwise, rely on the shell to allocate a process group for git-annex.

There is potentially a problem if some other program runs git-annex
directly (not using sh -c). The program and git-annex would then be in
the same process group. If that git-annex starts a transfer and it's
canceled, the program would also get killed. May or may not be a desired
result.

Also, the new updateTransferInfo probably closes a race where it was
possible for the thread id to not be recorded in the transfer info, if
the transfer info file from the transfer process is read first.
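
A simplified sketch of the first point, using System.Process (the command
and arguments here are placeholders, not how the assistant actually
invokes transfers): the transfer command is started in its own process
group, so cancelling it can signal the whole group, taking any rsync
children down with it.

    import System.Process

    -- Start a transfer command in a fresh process group and hand back
    -- the handle, so a cancel can signal the entire group.
    startTransferProcess :: FilePath -> [String] -> IO ProcessHandle
    startTransferProcess cmd args = do
      (_, _, _, pid) <- createProcess (proc cmd args) { create_group = True }
      return pid

    -- Cancel a transfer: interrupt the whole process group, then reap it.
    cancelTransferProcess :: ProcessHandle -> IO ()
    cancelTransferProcess pid = do
      interruptProcessGroupOf pid
      _ <- waitForProcess pid
      return ()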
| |
This doesn't quite work, because canceling a transfer sends a signal
to git-annex, but not to rsync (etc).

Looked at making git-annex run in its own process group, which could then
be killed, and would kill child processes. But rsync checks whether its
process group is the foreground process group and doesn't show progress if
not, and when git has run git-annex, if git-annex makes a new process
group, that is not the case. Also, if git has run git-annex, ctrl-c
wouldn't be propagated to it if it made a new process group.

So this seems like a blind alley, but recording it here just in case.
| |
Should work (untested) for transfers being run by other processes.
Not yet for transfers being run by the assistant. killThread does not
kill processes forked off by a thread. To fix this, it will probably be
necessary to make `git annex getkey` and `git annex sendkey` commands that
operate on keys, and write their own transfer info. Then the assistant
can run them, and kill them, as needed.
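
The planned shape of that, sketched with placeholder command arguments
(the eventual command names may differ from getkey/sendkey): run the
transfer as a separate process whose handle the assistant keeps, since
killing a Haskell thread does not kill processes the thread spawned.

    import System.Process

    -- killThread only stops the Haskell thread; a process it spawned
    -- keeps running. Keeping the ProcessHandle lets the assistant stop
    -- the transfer process itself.
    runTransferCommand :: [String] -> IO ProcessHandle
    runTransferCommand args = spawnProcess "git-annex" args  -- placeholder

    stopTransfer :: ProcessHandle -> IO ()
    stopTransfer pid = do
      terminateProcess pid
      _ <- waitForProcess pid
      return ()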
| |
This commit includes a paydown on technical debt incurred two years ago,
when I didn't know that it was bad to make custom Read and Show instances
for types. As the routes need Read and Show for Transfer, which includes a
Key, and deriving my own Read instance of Key was not practical,
I had to finally clean that up.

So the compact Key read and show functions are now file2key and key2file,
and Read and Show are now derived instances.

Changed all code that used the old instances, compiler checked.
(There were a few places, particularly in Command.Unused and the test
suite, where the Show instance continues to be used for legitimate
comparisons; i.e. show key_x == show key_y (though really in a bloom filter).)
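
The pattern, shown on a cut-down stand-in for the real Key type (the real
key2file format has more fields and a richer syntax): Read and Show are
plain derived instances, and the compact on-disk form lives in explicitly
named functions.

    -- A simplified stand-in; the real Key is more complex.
    data Key = Key
      { keyBackend :: String
      , keyName    :: String
      } deriving (Read, Show, Eq, Ord)

    -- Compact serialization, kept separate from Show.
    key2file :: Key -> FilePath
    key2file k = keyBackend k ++ "--" ++ keyName k

    -- Partial inverse of key2file for this simplified format.
    file2key :: FilePath -> Maybe Key
    file2key s = case break (== '-') s of
      (b, '-':'-':n) -> Just (Key b n)
      _              -> Nothing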
| |
The just-created repo has no master branch commits yet. This is now
handled by merging in the master branch from the populated drive.
| |
De-emphasize "clone", because it's not that simple. The removable drive may
already have an annex with content; if so it'll get synced in.
| |
branch yet
| |
The TMVar is supposed to be left empty once the map is empty, but the code
neglected to do that, so the next takeTMVar got an empty map, a case that
was not handled since it was supposed to never happen.

Also, avoid any possibility of this crash: if an empty map somehow creeps
in, just retry.
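
In STM terms the fix looks roughly like this (a sketch over a bare Map,
not the actual assistant code): removing the last entry leaves the TMVar
empty, and a reader that does find an empty map retries instead of
crashing.

    import Control.Concurrent.STM
    import qualified Data.Map as M

    type TransferMap k v = TMVar (M.Map k v)

    -- Take the map out, remove an entry, and only put the map back if it
    -- still has entries; removing the last one leaves the TMVar empty.
    removeTransfer :: Ord k => TransferMap k v -> k -> STM ()
    removeTransfer var k = do
      m <- takeTMVar var
      let m' = M.delete k m
      if M.null m'
        then return ()
        else putTMVar var m'

    -- Defensive reader: if an empty map somehow creeps in, retry rather
    -- than crash.
    takeNonEmpty :: TransferMap k v -> STM (M.Map k v)
    takeNonEmpty var = do
      m <- takeTMVar var
      if M.null m then retry else return m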