It's possible for there to be multiple queued changes all adding the same
file, and for those changes to be reordered. Maybe. This check guards
against such a reordering ending up adding the wrong version of the file
last.
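
A minimal sketch of what such a guard could look like, assuming a
hypothetical QueuedAdd type that carries the file status sampled when
the change was queued (none of these names are from the real code):

    import System.Posix.Files

    -- Hypothetical: a queued add, remembering the file status sampled
    -- when the change was first queued.
    data QueuedAdd = QueuedAdd
        { addFile :: FilePath
        , addStatus :: FileStatus
        }

    -- Before finalizing a queued add, re-stat the file and drop the
    -- change if it no longer matches the queued snapshot, so a stale,
    -- reordered change cannot add the wrong version last. (A real
    -- version would also catch the file having been deleted.)
    stillValid :: QueuedAdd -> IO Bool
    stillValid q = do
        s <- getSymbolicLinkStatus (addFile q)
        return $ modificationTime s == modificationTime (addStatus q)
              && fileSize s == fileSize (addStatus q)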
Rethought how to keep track of pending adds that need to be retried later.
The commit thread already wakes up every second when there are changes,
so let's keep pending adds queued as changes until they're safe to add.
Also, the committer is now smarter about avoiding empty commits when
all the adds are currently unsafe, or in the rare case that an add event
for a symlink is not received in time. It may now avoid empty commits
entirely.
This seems to work as before for inotify, and is untested for kqueue.
(Actually, commit batching seems to have improved for inotify, although I'm
not sure why. I'm seeing only two commits made during large batch
operations, and the first of those is the non-batch mode commit.)
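
A rough sketch of the shape of that loop; Change, safeToAdd, and
commitStaged are placeholders, not the real git-annex definitions:

    import Control.Concurrent (threadDelay)
    import Control.Monad (forever, unless)

    -- Placeholders standing in for the real types and operations:
    data Change = Change FilePath
    safeToAdd :: Change -> IO Bool    -- e.g. check with lsof
    safeToAdd _ = return True
    commitStaged :: [Change] -> IO ()
    commitStaged _ = return ()

    -- Wake once a second, split the queued changes into adds that are
    -- safe to perform now and ones that are not; unsafe adds go back
    -- on the queue as ordinary changes, and no commit is made when
    -- nothing could be staged.
    committerThread :: IO [Change] -> ([Change] -> IO ()) -> IO ()
    committerThread getChanges requeue = forever $ do
        threadDelay 1000000
        changes <- getChanges
        flagged <- mapM (\c -> fmap (\ok -> (ok, c)) (safeToAdd c)) changes
        let safe   = [c | (True,  c) <- flagged]
            unsafe = [c | (False, c) <- flagged]
        requeue unsafe
        unless (null safe) $
            commitStaged safe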
Kqueue needs to remember which files failed to be added due to being open,
and retry them. This commit gets the data in place for such a retry thread.
Broke KeySource out into its own file, and added Eq and Ord instances
so it can be stored in a Set.
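
The Set requirement boils down to something like this sketch (the
actual KeySource fields may differ):

    import qualified Data.Set as S

    -- A guess at the shape: the file being added, and where its
    -- content currently lives.
    data KeySource = KeySource
        { keyFilename :: FilePath
        , contentLocation :: FilePath
        }
        deriving (Eq, Ord, Show)

    -- With Eq and Ord available, failed adds can be remembered in a
    -- Set and retried later without accumulating duplicates.
    type FailedAdds = S.Set KeySource

    remember :: KeySource -> FailedAdds -> FailedAdds
    remember = S.insert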
Kqueue code for dispatching events is not tested and probably doesn't
build.
Move lsof check, and display a message before daemon startup if on an
unsupported OS.
I've tested both cases where this is necessary, and it works great!
A file with multiple writers is not added until the last one closes it.
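
The check can be as blunt as asking lsof whether anything still has the
file open; a rough sketch (a real version would parse lsof's output and
count only writers):

    import System.Exit (ExitCode(..))
    import System.Process (readProcessWithExitCode)

    -- lsof exits nonzero when no process has the named file open, so
    -- the exit code alone gives a crude open/closed answer.
    fileIsOpen :: FilePath -> IO Bool
    fileIsOpen file = do
        (code, _, _) <- readProcessWithExitCode "lsof" ["--", file] ""
        return (code == ExitSuccess)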
When there are duplicate add events for the same file, only add it once.
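
A sketch of that deduplication, assuming the change type exposes the
affected path (changeFile here is an invented accessor):

    import Data.Function (on)
    import Data.List (nubBy)

    -- Placeholder for the real change type:
    data Change = Change { changeFile :: FilePath }

    -- Collapse duplicate add events for the same file down to one,
    -- keeping the order of first arrival.
    dedupAdds :: [Change] -> [Change]
    dedupAdds = nubBy ((==) `on` changeFile)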
There is indeed a race waiting for LinkChanges:
1. file annexed, link made
2. link deleted
3. inotify event for the link creation fires, but since the link is
   already gone, the handler is not run
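
One way to keep that race from wedging the committer is to bound the
wait for the expected events; a sketch, with the event source modeled
as a plain MVar rather than the real notification machinery:

    import Control.Concurrent.MVar (MVar, takeMVar)
    import Control.Monad (replicateM_)
    import Data.Maybe (isJust)
    import System.Timeout (timeout)

    -- Wait for n expected link-change events, but give up after one
    -- second, since the handler may never fire when the link has
    -- already been deleted by the time the event is dispatched.
    waitLinkChanges :: Int -> MVar () -> IO Bool
    waitLinkChanges n events = do
        r <- timeout 1000000 $ replicateM_ n (takeMVar events)
        return (isJust r)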
Defer adding files to the annex until commit time, when during a batch
operation, a bundle of files will be available. This will allow checking
them all with a single lsof call.
The tricky part is that adding the file causes a symlink change inotify
event. So I made it wait for an appropriate number of symlink changes to
be received before continuing with the commit. This avoids any delay
in the commit process. It is possible that some unrelated symlink change is
made; if that happens it'll commit it and delay committing the newly added
symlink for 1 second. This seems ok. I do rely on the expected symlink
change event always being received, but only when the add succeeds.
Another way to do it might be to directly stage the symlink, and then
ignore the redundant symlink change event. That would involve some
redundant work, and perhaps an empty commit, but if this code turns
out to have some bug, that'd be the best way to avoid it.
FWIW, this change seems, as a bonus, to have produced better grouping
of batch changes into single commits. Before, a large batch change would
result in a series of commits, with the first containing only one file
and each of the rest bundling a number of files. Now, the added wait for
the symlink changes to arrive gives time for additional add changes to
be processed, all within the same commit.
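
A sketch of that commit-time flow; performAdd, waitSymlinkChanges, and
commitStaged are invented stand-ins for the real operations:

    import Control.Monad (filterM)

    -- Placeholders for the real operations:
    performAdd :: FilePath -> IO Bool   -- rewrites the file to a symlink
    performAdd _ = return True
    waitSymlinkChanges :: Int -> IO ()  -- block until n events arrive
    waitSymlinkChanges _ = return ()
    commitStaged :: IO ()
    commitStaged = return ()

    -- Each successful add echoes back one symlink-change event, so
    -- absorb exactly that many before committing, instead of sleeping
    -- for a fixed time.
    commitAdds :: [FilePath] -> IO ()
    commitAdds files = do
        ok <- filterM performAdd files
        waitSymlinkChanges (length ok)
        commitStaged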
A few places catch IO errors after calling runThreadState, but since
the MVar was not restored on error, a later attempt to read from it
would deadlock.
I'd like to catch all exceptions here, but I could not get the types
to unify.
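
The fix amounts to restoring the MVar before the exception propagates;
a minimal sketch of the pattern, with the state runner abstracted to a
function argument:

    import Control.Concurrent.MVar (MVar, putMVar, takeMVar)
    import Control.Exception (onException)

    -- Run an action against state kept in an MVar. If the action
    -- throws, put the original state back, so a later takeMVar cannot
    -- deadlock on an empty MVar.
    runThreadState :: MVar st -> (st -> IO (a, st)) -> IO a
    runThreadState mvar a = do
        startstate <- takeMVar mvar
        (r, newstate) <- a startstate `onException` putMVar mvar startstate
        putMVar mvar newstate
        return r

Control.Concurrent.MVar's modifyMVar packages up the same take/restore
discipline, with asynchronous exceptions masked as well.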
Currently wakes up once a day, and does nothing. :)
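
The skeleton is just a delay loop; a sketch (threadDelay takes
microseconds, so a full day's delay needs a 64-bit Int):

    import Control.Concurrent (threadDelay)
    import Control.Monad (forever)

    -- Wake up once a day; the actual sanity checks are still to come.
    sanityCheckerThread :: IO ()
    sanityCheckerThread = forever $ do
        threadDelay oneDay
        return () -- does nothing, for now
      where
        oneDay = 24 * 60 * 60 * 1000000 :: Int -- microseconds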
Also afterLastDaemonRun, with 10 minutes of slop to handle the majority
of clock skew issues.
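
A sketch of the idea, with an invented signature: compare a file's
timestamp against the daemon's last recorded run, less ten minutes of
slop:

    import Foreign.C.Types (CTime)

    -- Was this mtime after the daemon last ran? Subtracting ten
    -- minutes of slop keeps modest clock skew from hiding changes.
    afterLastDaemonRun :: CTime -> Maybe CTime -> Bool
    afterLastDaemonRun _ Nothing = True -- no record, assume yes
    afterLastDaemonRun t (Just lastran) = t > lastran - slop
      where
        slop = 10 * 60 -- ten minutes, in seconds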