Commit messages
Added a RemoteChecker thread that waits for problems to be reported with
remotes, and checks whether their git repository is in need of repair.

Currently, only failures to sync with the remote cause a problem to be
reported. This seems like enough, but we'll see.

Plugging in a removable drive with a corrupted repository on it now
automatically repairs the repository, as long as the corruption causes
git push or git pull to fail. Some types of corruption do not, e.g.,
missing or corrupt objects for blobs that git push doesn't need to look at.

So, this is not really a replacement for scheduled git repository fscking.
But it does make the assistant more robust.

This commit is sponsored by Fernando Jimenez.
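At its core, the thread described is a consumer blocking on a channel of
problem reports. A minimal sketch of that shape, where Remote, ProblemChan,
reportProblem, and repairRepository are hypothetical stand-ins rather than
git-annex's actual types:

    import Control.Concurrent.STM
    import Control.Monad (forever, unless)
    import System.Exit (ExitCode(..))
    import System.Process (readProcessWithExitCode)

    newtype Remote = Remote { remotePath :: FilePath }

    type ProblemChan = TChan Remote

    -- Called when syncing with a remote fails.
    reportProblem :: ProblemChan -> Remote -> IO ()
    reportProblem chan r = atomically (writeTChan chan r)

    -- Block until a problem is reported, then check whether the
    -- remote's git repository needs repair.
    remoteCheckerThread :: ProblemChan -> IO ()
    remoteCheckerThread chan = forever $ do
        r <- atomically (readTChan chan)
        healthy <- repoHealthy r
        unless healthy (repairRepository r)

    -- git fsck exits nonzero when the repository is damaged.
    repoHealthy :: Remote -> IO Bool
    repoHealthy (Remote dir) = do
        (code, _, _) <- readProcessWithExitCode
            "git" ["-C", dir, "fsck", "--no-dangling"] ""
        return (code == ExitSuccess)

    -- Placeholder; the real repair logic lives elsewhere.
    repairRepository :: Remote -> IO ()
    repairRepository (Remote dir) =
        putStrLn ("would attempt repair of " ++ dir)

Blocking on the TChan means the thread costs nothing until a sync failure
is actually reported.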
(e.g., on removable drives)

gcrypt remotes are not yet handled.

This commit was sponsored by Sören Brunk.
Currently only implemented for local git remotes. May try to add support
to git-annex-shell for ssh remotes later. Could conceivably also be
supported by some special remote, although that seems unlikely.

Cronner uses this when available, and when it is not, falls back to
fsck --fast --from remote.

git annex fsck --from does not itself use this interface. To do so, I
would need to pass --fast and all other options that influence fsck on to
the git annex fsck that it runs inside the remote. And that seems like a
lot of work for a result that would be no better than
cd remote; git annex fsck.

This may need to be revisited if git-annex-shell gets support, since it
may be the case that the user cannot ssh to the server to run git-annex
fsck there, but can run git-annex-shell there.

This commit was sponsored by Damien Diederen.
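A rough sketch of the dispatch described above, assuming a hypothetical
Remote record where remoteFsck stands in for the per-remote fsck interface
(only local git remotes provide one), with the documented fallback:

    import System.Process (callProcess)

    data Remote = Remote
        { remoteName :: String
        , remoteFsck :: Maybe (IO ())  -- fsck run inside the remote
                                       -- itself; local git remotes only
        }

    -- Use the remote's own fsck interface when it has one; otherwise
    -- fall back to fscking it from here, as Cronner does.
    fsckRemote :: Remote -> IO ()
    fsckRemote r = case remoteFsck r of
        Just run -> run
        Nothing  -> callProcess "git"
            ["annex", "fsck", "--fast", "--from", remoteName r]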
Turns out that a lot of the time spent in a bulk add was just updating the
add alert to rotate through each file that was added. Showing one alert
makes for a significant speedup.

Also, when the webapp is open, this makes it use quite a lot less CPU
during bulk adds.

Also, it lets the user know when a bulk add happened, which is sorta
nice.
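For illustration only (both helpers are taken as parameters, so nothing
here is the assistant's real API): hoisting the alert update out of the
per-file loop is the whole change.

    slowBulkAdd, fastBulkAdd
        :: (String -> IO ()) -> (FilePath -> IO ()) -> [FilePath] -> IO ()
    -- One alert update per file: a webapp repaint in the inner loop.
    slowBulkAdd updateAlert addFile fs =
        mapM_ (\f -> updateAlert ("Adding " ++ f) >> addFile f) fs
    -- One alert update per batch.
    fastBulkAdd updateAlert addFile fs = do
        updateAlert ("Adding " ++ show (length fs) ++ " files")
        mapM_ addFile fs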
In the case of the inotify limit warning, particularly, if it happens once
it will keep happening, and so combining alerts resulted in an alert
message that grew much too large, taking up a lot of memory and becoming
too big for the webapp to display.
and remove any activity alerts
Needs fixes to build when the webapp is disabled.
This way it's only visible when transfers are not running, which is about
what a user would expect.
Like the old one, but does not mention which remotes are scanned.

I think this is less confusing, as it does not imply the remotes were
somehow accessed (which they are not; inaccessible remotes can be
scanned).
syncing with a remote fails.
repository needs to be configured.
The pairing complete alert had been combining with some other alert; fixed
this, and now it's displayed once xmpp pairing is complete on both sides.
This includes keeping track of which buddies we're pairing with, to know
which PairAcks are legitimate.
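A sketch of that bookkeeping, with hypothetical names rather than the
assistant's real types: remember which buddies we sent a PairReq to, and
accept a PairAck only from one of them.

    import qualified Data.Set as S

    type Buddy = String

    newtype PairingState = PairingState { pairingWith :: S.Set Buddy }

    -- Note the buddy when we send them a PairReq.
    startPairing :: Buddy -> PairingState -> PairingState
    startPairing b (PairingState s) = PairingState (S.insert b s)

    -- A PairAck is legitimate only if it comes from a buddy we are
    -- actually pairing with.
    legitimatePairAck :: Buddy -> PairingState -> Bool
    legitimatePairAck b (PairingState s) = b `S.member` s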
New 0.5 changes the API, rather gratuitously, so run away. I can just use
Hamlet here.
I am befuddled that Twitter Bootstrap has no built-in icon for The Cloud,
and also that Chromium's depiction of CLOUD (U+2601) has an uncanny
resemblance to PILE OF POO (U+1F4A9) when rendered small, and looks like a
looming Frankenstorm when rendered large, and not a sweet, sunny,
nothing-can-go-wrong The Cloud.
<http://www.fileformat.info/info/unicode/char/2601/browsertest.htm>

So, I must resort to irony in my choice of icons.
Finally.

Last bug fixes here: Send PairResp with the same UUID as in the PairReq.
Fix an off-by-one in the code that filters out our own pairing messages.

Also reworked the pairing alerts, which are still slightly buggy.
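The UUID fix in miniature, using made-up message types rather than the
assistant's real ones: the response echoes the UUID carried by the request
it answers, so the requester can match the two up.

    newtype UUID = UUID String deriving Eq

    data PairMsg
        = PairReq  UUID
        | PairResp UUID

    -- The response carries the same UUID as the request it answers.
    respondTo :: PairMsg -> Maybe PairMsg
    respondTo (PairReq u) = Just (PairResp u)
    respondTo _           = Nothing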
Has a button to cancel the request.
They work fine. But I had to go to a lot of trouble to get Yesod to render
routes in a pure function. It may instead make more sense to have each
alert have an associated IO action, and a single route that runs the IO
action of a given alert id. I just wish I'd realized that before the past
several hours of struggling with something Yesod really doesn't want to
allow.
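A sketch of that alternative design, with hypothetical types: each alert
carries an associated IO action, and one route's handler looks up and runs
the action for a given alert id, so no routes need rendering in pure code.

    import qualified Data.Map as M

    type AlertId = Int

    data Alert = Alert
        { alertMessage :: String
        , alertButton  :: Maybe (IO ())  -- action to run when clicked
        }

    -- The single route's handler: run the button action of the alert
    -- with the given id, if there is one.
    runAlertAction :: M.Map AlertId Alert -> AlertId -> IO ()
    runAlertAction alerts i =
        case M.lookup i alerts >>= alertButton of
            Just act -> act
            Nothing  -> return ()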
The expensive transfer scan now scans a whole set of remotes in one pass.
So at startup, or when the network comes up, it will run only once.

Note that this can result in transfers from/to higher cost remotes being
queued before other transfers of other content from/to lower cost remotes.
Before, low cost remotes were scanned first and all their transfers came
first. When multiple transfers are queued for a key, the lower cost ones
are still queued first. However, this could result in transfers from slow
remotes running for a long time while transfers of other data from faster
remotes wait.

I expect to make the transfer queue smarter about ordering and/or make it
allow multiple transfers at a time, which should eliminate this annoyance.
(Also, it was already possible to get into that situation; for example, if
the network was up, lots of transfers from slow remotes might be queued,
and then a disk is mounted and its faster transfers have to wait.)

Also note that this means I don't need to improve the code in
Assistant.Sync that currently checks if any of the reconnected remotes
have diverged, and if so, queues scans of all of them. That had been very
inefficient, but now doesn't matter.
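The per-key ordering rule mentioned above, sketched with hypothetical
types (git-annex's real cost values and queue are more involved): when
several transfers are queued for one key, lower cost remotes go first.

    import Data.List (sortOn)

    data Remote = Remote { remoteName :: String, remoteCost :: Int }
    data Transfer = Transfer
        { transferRemote :: Remote
        , transferKey    :: String
        }

    -- Queue transfers for a key against the lower cost remotes first.
    queueTransfersFor :: String -> [Remote] -> [Transfer]
    queueTransfersFor key remotes =
        [ Transfer r key | r <- sortOn remoteCost remotes ]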