Avoids flicker to 0% when resuming a paused transfer.
The code to maintain that TChan in parallel with the list was buggy;
the two were not always the same. And all that TChan was needed for was
blocking on the next transfer, which can be accomplished just as well by
checking the size and retrying, thanks to STM.
Also, this is faster, and uses less memory. Total win.
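
A minimal sketch of the STM approach, with assumed names rather than
the actual git-annex code:

    import Control.Concurrent.STM

    -- Block until the queued list is non-empty, then pop the head.
    -- retry suspends the transaction until another thread writes the
    -- TVar, so no parallel TChan is needed just for blocking.
    getNextTransfer :: TVar [t] -> STM t
    getNextTransfer qv = do
        q <- readTVar qv
        case q of
            [] -> retry
            (t:ts) -> do
                writeTVar qv ts
                return t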
Don't want to stomp on fields other than the ones being changed.
I had an intuition that throwTo might be blocking because an exception
was caught and the exception handler was running. This seems to be the
case, and is avoided by using try. However, I can't really find
anything in throwTo's documentation that justifies this behavior.
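
A sketch of the workaround under that hypothesis; runJob is an
illustrative name, not the actual code. Control.Exception documents
that a catch handler runs with asynchronous exceptions masked, which
would explain a throwTo aimed at the thread blocking; with try, the
exception is returned as a value and execution continues unmasked:

    import Control.Exception

    runJob :: IO () -> IO ()
    runJob job = do
        -- try returns the exception as a value instead of running a
        -- masked handler, so a pending throwTo can be delivered.
        r <- try job
        case r of
            Left e -> putStrLn ("job failed: " ++ show (e :: SomeException))
            Right () -> return ()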
When multiple downloads of a key are queued, it starts the first, but
leaves the other downloads in the queue. This ensures that we don't
lose a queued download if the one that got started fails.
Change alterTransferInfo to not merge in old values, including
transferPaused.
The poller only alters, to avoid re-adding transfers that get removed.
The watcher updates, to add new transfers.
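
A sketch of the distinction using Data.Map stand-ins for the real
transfer map (names and types here are assumptions):

    import qualified Data.Map as M

    -- alter-style: touch an entry only if it already exists, so the
    -- poller cannot re-add a transfer that was removed.
    alterTransferInfo :: Ord t => t -> (i -> i) -> M.Map t i -> M.Map t i
    alterTransferInfo t f = M.adjust f t

    -- update-style: insert the entry when missing, so the watcher can
    -- add newly seen transfers.
    updateTransferInfo :: Ord t => t -> i -> M.Map t i -> M.Map t i
    updateTransferInfo = M.insert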
Don't display redundant queued downloads. The only problem with this is
that it reduces the total number of queued transfers the webapp displays.
Since a failed transfer gets retried until it succeeds, there's no
point in bothering the user about it.
Run code that pops off the next queued transfer and adds it to the active
transfer map within an allocated transfer slot, rather than before
allocating a slot. Fixes the transfers display, which had been displaying
the next transfer as a running transfer, while the previous transfer was
still running.
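
A sketch of the fixed ordering, with a QSem standing in for the real
transfer slots (illustrative, not the actual code):

    import Control.Concurrent (ThreadId, forkIO)
    import Control.Concurrent.QSem
    import Control.Concurrent.STM
    import Control.Exception (bracket_)

    -- A slot is acquired first; only then is the next transfer popped
    -- off the queue (and marked active), so the display never shows a
    -- transfer as running while it is still waiting for a slot.
    inTransferSlot :: QSem -> STM t -> (t -> IO ()) -> IO ThreadId
    inTransferSlot slots popNext run =
        forkIO $ bracket_ (waitQSem slots) (signalQSem slots) $ do
            t <- atomically popNext
            run t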
Doesn't fix the bug I thought it'd fix, but is clearly correct.
Fixes display of the remote name in the dashboard.
Currently only the web special remote is readonly, but it'd be possible to
also have readonly drives, or other remotes. These are handled in the
assistant by only downloading from them, and never trying to upload to
them.
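
That policy, stated as a predicate (an illustrative sketch, not the
assistant's actual API):

    -- Readonly remotes are only ever used as download sources; uploads
    -- to them are never queued.
    data Direction = Upload | Download
        deriving (Eq, Show)

    wantTransfer :: Direction -> Bool -> Bool
    wantTransfer Upload readonly = not readonly
    wantTransfer Download _ = True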
But do exclude them when pushing out changes.
The expensive transfer scan now scans a whole set of remotes in one
pass. So at startup, or when the network comes up, it will run only
once.

Note that this can result in transfers from/to higher cost remotes
being queued before transfers of other content from/to lower cost
remotes. Before, low cost remotes were scanned first and all their
transfers came first. When multiple transfers are queued for a key, the
lower cost ones are still queued first. However, this could result in
transfers from slow remotes running for a long time while transfers of
other data from faster remotes wait.

I expect to make the transfer queue smarter about ordering and/or make
it allow multiple transfers at a time, which should eliminate this
annoyance. (Also, it was already possible to get into that situation:
for example, if the network was up, lots of transfers from slow remotes
might be queued, and then a disk is mounted and its faster transfers
have to wait.)

Also note that this means I don't need to improve the code in
Assistant.Sync that currently checks if any of the reconnected remotes
have diverged, and if so, queues scans of all of them. That had been
very inefficient, but now doesn't matter.
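
An illustrative sketch of the one-pass shape (all names assumed): a
single walk over the keys, with each key's candidate remotes ordered by
cost, so the cheaper transfer for a given key is still queued first.

    import Data.List (sortOn)

    data Remote = Remote { remoteName :: String, remoteCost :: Int }

    -- One pass over all keys, instead of one full scan per remote.
    scanPass :: [Remote] -> [key] -> [(key, Remote)]
    scanPass remotes keys =
        [ (k, r)
        | k <- keys                       -- single pass
        , r <- sortOn remoteCost remotes  -- cheapest first per key
        ]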
Used by the assistant rather than copy, this is faster because it
avoids using git ls-files, avoids redundantly checking the location
log, and runs in oneshot mode, which avoids making a commit to the
git-annex branch for every file transferred.
Since it turned out to make sense to always scan all remotes on startup,
there's no need to persist the info about which have been scanned.
low cost ==> high priority
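
Stated as code (illustrative): comparing costs through Down makes the
lowest-cost remote compare as the highest priority.

    import Data.Ord (Down(..))

    type Cost = Int

    -- maximumBy (comparing (priority . remoteCost)) then selects the
    -- remote with the lowest cost.
    priority :: Cost -> Down Cost
    priority = Down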
Or when a remote first becomes available after startup.
There are multiple reasons to do this:
* The local network may be up solid, but a route to a networked remote
is having trouble. Any transfers to it that fail should be retried.
* Someone might have wicd running, but like to bring up new networks
by hand too. This way, it'll eventually notice them.
The problem with using it here is that, if a removable drive is
scanned and gets disconnected during the scan, testing for all the
files will indicate it doesn't have them, and the scan is logged as
completed successfully, without the necessary transfers being queued.
of a remote
This way, we get transfers from the cheapest remotes.
Found a very cheap way to determine when a disconnected remote has
diverged, and has new content that needs to be transferred: piggyback
on the git-annex branch update, which already checks for divergence.

However, this does not check if new content has appeared locally while
disconnected, content that should be transferred to the remote.

Also, this does not handle cases where the two git repos are in sync,
but their content syncing has not caught up yet.

This code could have its efficiency improved:
* When multiple remotes are synced, if any one has diverged, they're
  all queued for transfer scans.
* The transfer scanner could be told whether the remote has new
  content, the local repo has new content, or both, and could optimise
  its scan accordingly.
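
A sketch of the piggybacking; every name below is an illustrative
stand-in, not the git-annex API:

    import Control.Monad (when)

    type Remote = String

    -- Stand-in for the git-annex branch update, which already reports
    -- whether the remote's branch had diverged.
    mergeAnnexBranch :: Remote -> IO Bool
    mergeAnnexBranch _ = return True

    queueTransferScan :: Remote -> IO ()
    queueTransferScan r = putStrLn ("queue transfer scan of " ++ r)

    -- The expensive transfer scan is queued only when the branch
    -- update detected divergence.
    syncRemote :: Remote -> IO ()
    syncRemote r = do
        diverged <- mergeAnnexBranch r
        when diverged $ queueTransferScan r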