Commit messages

Fix a bug where no alert was shown when both computers started the
pairing process separately.

I was able to reproduce something very like this bug by starting
pairing separately on both computers under poor network conditions
(i.e., weak wifi on my front porch). Neither computer showed an alert
for the PairReq messages it was seeing (intermittently) from the other.

So, I've made it so that a PairReq message that has not been seen
before always makes the alert pop up, even if the assistant thinks it
is in the middle of its own pairing process (or even another pairing
process with a different box on the LAN).

(This shouldn't cause a rogue PairAck to disrupt a pairing process
partway through.)
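As an illustration of that rule, here is a minimal sketch; the PairReq
and PairingState types below are hypothetical stand-ins, not the
assistant's actual types:

    -- Minimal sketch: an alert is raised for any PairReq not seen before,
    -- deliberately ignoring whether a pairing process is already underway.
    import qualified Data.Set as S

    newtype PairReq = PairReq String  -- identifies the requesting peer
        deriving (Eq, Ord, Show)

    data PairingState = PairingState
        { seenReqs   :: S.Set PairReq  -- PairReqs already alerted about
        , inProgress :: Bool           -- mid-pairing ourselves?
        }

    -- Note that inProgress is intentionally not consulted here.
    shouldAlert :: PairingState -> PairReq -> Bool
    shouldAlert st req = req `S.notMember` seenReqs st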
The msg contains a haskell-escaped string, so control characters in it
can also be escaped. So this didn't work before, really.

Got rid of the \n check, because current pairing messages actually do
contain a \n, after the ssh public key. Don't want to break backwards
compatibility.
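For illustration, a minimal sketch of such a sanity check over the
unescaped message text (sanePairMsg is a made-up name, not the actual
code):

    -- Sketch: reject control characters in a pairing message, but allow
    -- the '\n' that follows the ssh public key.
    import Data.Char (isControl)

    sanePairMsg :: String -> Bool
    sanePairMsg = all ok
      where
        ok '\n' = True
        ok c    = not (isControl c)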
When starting up the assistant, it'll show a reminder about the current
repository if it doesn't have any checks configured. And when a
removable drive is plugged in, it will remind if a repository on it
lacks checks.

Since that might be annoying, the reminders can be turned off.

This commit was sponsored by Nedialko Andreev.
Added a RemoteChecker thread that waits for problems to be reported
with remotes, and checks if their git repository is in need of repair.

Currently, only failures to sync with the remote cause a problem to be
reported. This seems like enough, but we'll see.

Plugging in a removable drive with a corrupted repository on it does
automatically repair the repository, as long as the corruption causes
git push or git pull to fail. Some types of corruption do not, e.g.
missing/corrupt objects for blobs that git push doesn't need to look
at. So, this is not really a replacement for scheduled git repository
fscking. But it does make the assistant more robust.

This commit is sponsored by Fernando Jimenez.
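The general shape of such a thread, as a minimal sketch; the
TChan-based queue and the checkAndRepair action are hypothetical, not
the assistant's actual plumbing:

    import Control.Concurrent.STM
    import Control.Monad (forever)

    type RemoteName = String

    -- Block until a problem is reported for a remote, then check it.
    remoteCheckerThread :: TChan RemoteName -> (RemoteName -> IO ()) -> IO ()
    remoteCheckerThread problems checkAndRepair = forever $ do
        r <- atomically (readTChan problems)
        checkAndRepair r

    -- Called, for example, when a sync with a remote fails.
    reportProblem :: TChan RemoteName -> RemoteName -> IO ()
    reportProblem problems = atomically . writeTChan problems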
(e.g., on removable drives)

gcrypt remotes are not yet handled.

This commit was sponsored by Sören Brunk.
without losing data.
No functional changes, aside from some adjustments to lifting in code
that turned out to be able to run in Assistant rather than Handler.
to avoid contending with the user's desktop login process.
In the config monitor, we want files relative to the top of the working
directory.
Currently only implemented for local git remotes. May try to add
support to git-annex-shell for ssh remotes later. Could conceivably
also be supported by some special remote, although that seems unlikely.

Cronner uses this when available, and when not, falls back to
fsck --fast --from remote

git annex fsck --from does not itself use this interface. To do so, I
would need to pass --fast and all other options that influence fsck on
to the git annex fsck that it runs inside the remote. And that seems
like a lot of work for a result that would be no better than
cd remote; git annex fsck

This may need to be revisited if git-annex-shell gets support, since it
may be the case that the user cannot ssh to the server to run git-annex
fsck there, but can run git-annex-shell there.

This commit was sponsored by Damien Diederen.
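The fallback decision described above, as a minimal sketch; the Remote
record and remoteFsck field here are hypothetical stand-ins for the
real remote interface:

    import Control.Monad (void)
    import System.Process (callCommand)

    data Remote = Remote
        { remoteName :: String
        , remoteFsck :: Maybe (IO Bool)  -- Just when the remote can fsck itself
        }

    fsckRemote :: Remote -> IO ()
    fsckRemote r = case remoteFsck r of
        Just run -> void run
        -- real code would shell-quote the remote name
        Nothing  -> callCommand ("git annex fsck --fast --from " ++ remoteName r)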
I probably need to improve handling of the PleaseTerminate exception to
kill the fsck process. Also, if fsck finds bad files, something needs
to requeue downloads of them. Otherwise, this should work, but is probably
quite buggy since I have only tested the pure code over the past 2 days.
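One possible shape for that improved handling, as a sketch; the
PleaseTerminate type below is only a stand-in for the assistant's
exception, and runFsck is a made-up name:

    import Control.Exception (Exception, onException)
    import System.Process

    -- Stand-in for the assistant's PleaseTerminate exception, which would
    -- be delivered asynchronously (e.g. via throwTo) to the fsck thread.
    data PleaseTerminate = PleaseTerminate
        deriving Show

    instance Exception PleaseTerminate

    runFsck :: FilePath -> IO ()
    runFsck repo = do
        (_, _, _, pid) <- createProcess (proc "git" ["fsck"]) { cwd = Just repo }
        -- if any exception (PleaseTerminate included) arrives while waiting,
        -- kill the external fsck process before propagating it
        _ <- waitForProcess pid `onException` terminateProcess pid
        return ()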
Extends the index.lock handling to other git lock files. I surveyed
all lock files used by git, and found more than I expected. All are
handled the same way by git; it leaves them open while doing the
operation, possibly writing the new file content to the lock file, and
then closes them when done.

The gc.pid file is excluded because it won't affect the normal
operation of the assistant, and waiting for a gc to finish on startup
wouldn't be good.

All threads except the webapp thread wait on the new startup sanity
checker thread to complete, so they won't try to do things with git
that fail due to stale lock files. The webapp thread mostly avoids
doing that kind of thing itself. A few configurators might fail on
lock files, but only if the user is explicitly trying to run them. The
webapp needs to start immediately when the user has opened it, even if
there are stale lock files.

Arranging for the threads to wait on the startup sanity checker was a
bit of a bear. All the NotificationHandles have to be set up before the
startup sanity checker runs, or they won't see its signal. Perhaps the
NotificationBroadcaster is not the best interface to have used for
this. Oh well, it works.

This commit was sponsored by Michael Jakl.
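The wait-for-startup pattern, as a minimal sketch with a plain MVar
standing in for the NotificationBroadcaster (the names here are
hypothetical):

    import Control.Concurrent
    import Control.Monad (forM_)

    -- Start workers that block until the startup sanity check completes.
    startWorkers :: IO () -> [IO ()] -> IO ()
    startWorkers sanityCheck workers = do
        ready <- newEmptyMVar
        forM_ workers $ \w -> forkIO $ do
            readMVar ready   -- blocks until the MVar is filled
            w
        sanityCheck          -- e.g. detect and clean up stale git lock files
        putMVar ready ()     -- wakes every waiting worker at once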
it so it does not interfere with the automatic commits of changed files.
adding content that gets deduplicated.
Note that this turned out to remove a syscall, not add any expense.
Otherwise, I would not have done it.
scan. This prevents repeated retries to download files that are not
available, or are not referenced by the current git tree.

This is motivated by a user report that the assistant was repeatedly
retrying transfers of files that had been deleted (in direct mode, so
removing the only copy).

Note that the glacier code retries failed transfers after a while, so
that downloads that have aged long enough to become available get tried
again. This is ok; if we're doing a full transfer scan, we'll retry
every file that is still in the git tree.

Also note that this makes the assistant less likely to get every file
referenced by old revs of the git tree. That's not something the
assistant tries to ensure anyway, so I feel this is acceptable.
To support this, a core.gcrypt-id is stored by git-annex inside the git
config of a local gcrypt repository, when setting it up.
That is compared with the remote's cached gcrypt-id. When different, a
drive has been changed. git-annex then looks up the remote config for
the uuid mapped from the core.gcrypt-id, and tweaks the configuration
appropriately. When there is no known config for the uuid, it will refuse to
use the remote.
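The comparison just described, as a minimal sketch; the lookup map and
the Decision type are hypothetical, while the real code works against
git-annex's remote configuration:

    import qualified Data.Map as M

    type GCryptId = String
    type UUID     = String

    data Decision
        = UseCachedConfig      -- same drive as before
        | SwitchToUUID UUID    -- drive changed; use the config for this uuid
        | RefuseUnknown        -- no known config for the new gcrypt-id
        deriving Show

    checkGCryptId
        :: GCryptId             -- core.gcrypt-id read from the drive's repo
        -> GCryptId             -- gcrypt-id cached for this remote
        -> M.Map GCryptId UUID  -- known gcrypt-id -> uuid mapping
        -> Decision
    checkGCryptId ondrive cached known
        | ondrive == cached = UseCachedConfig
        | otherwise = maybe RefuseUnknown SwitchToUUID (M.lookup ondrive known)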
Retrying every second is a bit much, especially given the current leak:
https://github.com/audreyt/network-multicast/issues/4
Requires git 1.8.4 or newer. When it's installed, a background
git check-ignore process is run, and used to efficiently check ignores
whenever a new file is added.

Thanks to Adam Spiers, for getting the necessary support into git for
this.

A complication is what to do about files that are gitignored but have
been checked into git anyway. git commands assume the ignore has been
overridden in this case, and do not need any more overriding to commit
a changed version.

However, for the assistant to do the same, it would have to run
git ls-files to check if the ignored file is in git. This is somewhat
expensive. Or it could use the running git-cat-file process to query
the file that way, but that requires transferring the whole file
content over a pipe, so it can be quite expensive too, for files that
are not git-annex symlinks.

Now imagine if the user knows that a file or directory tree will be
getting frequent changes, and doesn't want the assistant to sync it, so
gitignores it. The assistant could overload the system with repeated
ls-files checks!

So, I've decided that the assistant will not automatically commit
changes to files that are gitignored. This is a tradeoff. Hopefully it
won't be a problem to adjust .gitignore settings to not ignore files
you want the assistant to autocommit, or to manually git annex add
files that are listed in .gitignore.

(This could be revisited if git-annex gets access to an interface to
check the content of the index without forking a git command. This
could be libgit2, or perhaps a separate git cat-file --batch-check
process, so it wouldn't need to ship over the whole file content.)

This commit was sponsored by Francois Marier. Thanks!
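A minimal sketch of driving such a long-lived check-ignore process.
It is simplified: it uses line-based output rather than -z, and real
code must also treat paths matched only by negated (!) patterns as not
ignored:

    import Data.List (isPrefixOf)
    import System.IO
    import System.Process

    data CheckIgnoreHandle = CheckIgnoreHandle Handle Handle ProcessHandle

    startCheckIgnore :: IO CheckIgnoreHandle
    startCheckIgnore = do
        (Just hin, Just hout, _, pid) <- createProcess
            (proc "git" ["check-ignore", "--stdin", "--verbose", "--non-matching"])
                { std_in = CreatePipe, std_out = CreatePipe }
        hSetBuffering hin LineBuffering
        return (CheckIgnoreHandle hin hout pid)

    -- True when the path matches a .gitignore pattern.
    checkIgnored :: CheckIgnoreHandle -> FilePath -> IO Bool
    checkIgnored (CheckIgnoreHandle hin hout _) file = do
        hPutStrLn hin file
        hFlush hin
        resp <- hGetLine hout
        -- with --non-matching, unmatched paths come back as "::<TAB>path"
        return (not ("::" `isPrefixOf` resp))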
of files at once (around 5 thousand).

This bug was introduced in 74c30fc1a6e88d926d07e12f4e7ffc7d897bf9f6,
which improved handling of adding very large numbers of files by
ensuring that a minimum number of max size commits (5000 files each)
were done.

I accidentally made it wait for another change to appear after such a
max size commit, even if a lot of queued changes were already
accumulated. That resulted in a stall when it got to the end. Now fixed
to not wait any longer than necessary to ensure the watcher has had
time to wake back up after the max size commit.

This commit was sponsored by Michael Linksvayer. Thanks!
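The shape of the fixed loop, as a rough sketch with a hypothetical
change queue (the real commit thread is more involved):

    import Control.Concurrent (threadDelay)
    import Control.Concurrent.STM

    type Change = FilePath  -- stand-in for the assistant's Change type

    commitLoop :: TChan Change -> ([Change] -> IO ()) -> IO ()
    commitLoop q commit = loop
      where
        maxSize = 5000 :: Int
        loop = do
            first <- atomically (readTChan q)  -- block only while queue is empty
            rest  <- atomically (drain (maxSize - 1))
            commit (first : rest)
            -- give the watcher a moment to wake back up after a max size
            -- commit, but do not block waiting for a brand new change
            threadDelay 100000  -- 0.1s, arbitrary for this sketch
            loop
        drain 0 = return []
        drain n = do
            mc <- tryReadTChan q
            case mc of
                Nothing -> return []
                Just c  -> (c :) <$> drain (n - 1)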
in indirect mode.

This is a laziness problem. Despite the bang pattern on newfiles, the
list was not being fully evaluated before cleanup was called. Moving
cleanup out to after the list is actually used fixes this.

More evidence that I should be using ResourceT or pipes, if any was
needed.
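To illustrate why the bang was not enough: a bang pattern only forces
the list to weak head normal form (the first cons cell), so with a
lazily produced list the tail can still be demanded after cleanup has
run. A schematic sketch (the function names are made up):

    {-# LANGUAGE BangPatterns #-}

    -- 'produce' stands for something built on lazy IO (e.g. hGetContents),
    -- 'cleanup' for releasing the resources that production depends on.
    problematic :: IO [FilePath] -> IO () -> ([FilePath] -> IO ()) -> IO ()
    problematic produce cleanup use = do
        !newfiles <- produce  -- WHNF only; the tail may be unevaluated
        cleanup               -- too early: resources gone before list is consumed
        use newfiles

    fixed :: IO [FilePath] -> IO () -> ([FilePath] -> IO ()) -> IO ()
    fixed produce cleanup use = do
        newfiles <- produce
        use newfiles          -- fully consume the list first
        cleanup               -- then clean up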
remote.<name>.annex-sync set to false.

This affected both the hourly NetWatcherFallback thread and the syncing
done when a network connection is detected.

It was a reversion of sorts, introduced in
8655ea7f8e853b7de4defbca2655b741362ecd21, when annex-ignore was changed
to not control git syncing. I forgot to make it check annex-sync at
that point.
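For illustration, the check amounts to filtering remotes on that
setting before syncing; a trivial sketch with a hypothetical config
record:

    data RemoteConfig = RemoteConfig
        { remoteName      :: String
        , remoteAnnexSync :: Bool  -- remote.<name>.annex-sync (defaults to true)
        }

    syncableRemotes :: [RemoteConfig] -> [RemoteConfig]
    syncableRemotes = filter remoteAnnexSync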
This is a compromise. I would like to nice every thread except for the
webapp thread, but it's not practical to do so. That would need every
thread to run as a bound thread, which could add significant overhead.
And any forkIO would escape the nice level.
when a new repo is made.
information