This is a compromise. I would like to nice every thread except for the
webapp thread, but it's not practical to do so. That would need every
thread to run as a bound thread, which could add significant overhead.
And any forkIO would escape the nice level.
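
A minimal sketch of the tradeoff, assuming Linux's per-thread nice
semantics: nice(2) applies to the calling OS thread, so only a bound
thread can be niced independently, while unbound forkIO threads migrate
between OS threads and escape the nice level.

    import Control.Concurrent (ThreadId, forkOS)
    import System.Posix.Process (nice)

    -- A bound thread stays on one OS thread, so the nice level
    -- applies just to it. A forkIO thread would not stay put.
    niceWorker :: IO () -> IO ThreadId
    niceWorker work = forkOS $ do
        nice 10  -- deprioritize this OS thread only
        work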
when a new repo is made.
information
I noticed that when my modem hung up and redialed, my xmpp client was left
sending messages into the void. This will also handle any idle
disconnection issues.
I hope this will be easier to reason about, and less buggy. It was
certainly easier to write!
An immediate benefit is that with a traversable queue of push requests to
select from, the threads can be a lot fairer about choosing which client to
service next.
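
A rough sketch of what a traversable queue enables; the types here are
hypothetical, not git-annex's. Keeping requests in a TVar list lets a
worker pick any request matching a fairness criterion, rather than only
the head of a FIFO channel.

    import Control.Concurrent.STM

    newtype PushRequest = PushRequest { prClient :: String }

    type PushQueue = TVar [PushRequest]

    -- Remove and return the first request satisfying the criterion
    -- (eg, from a client not serviced recently), retrying until one
    -- shows up.
    selectRequest :: PushQueue -> (PushRequest -> Bool) -> STM PushRequest
    selectRequest q wanted = do
        rs <- readTVar q
        case break wanted rs of
            (skipped, r:rest) -> do
                writeTVar q (skipped ++ rest)
                return r
            _ -> retry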
This will avoid losing any messages received from 1 client when a push
involving another client is running.
Additionally, the handling of push initiation is improved; it's no
longer allowed to run multiple pushes of the same type to the same
client.
Still stalls sometimes :(
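
One way to enforce that no-duplicates rule, as a sketch with made-up
types rather than the actual git-annex code: track which side of a push
is running per client, and refuse to start a second one.

    import Control.Concurrent.STM
    import qualified Data.Set as S

    data PushSide = SendPack | ReceivePack deriving (Eq, Ord)

    type InProgress = TVar (S.Set (String, PushSide))

    -- Returns False when the same side of a push is already running
    -- for this client; the caller queues the request instead.
    tryStartPush :: InProgress -> String -> PushSide -> STM Bool
    tryStartPush v client side = do
        running <- readTVar v
        if (client, side) `S.member` running
            then return False
            else do
                writeTVar v (S.insert (client, side) running)
                return True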
Observed: With 2 xmpp clients, one would sometimes stop responding
to CanPush messages. Often it was in the middle of a receive-pack
of its own (or was waiting for a failed one to time out).
Now these are always immediately responded to, which is fine; the point
of CanPush is to find out if there's another client out there that's
interested in our push.
Also, in queueNetPushMessage, queue push initiation messages when
we're already running the side of the push they would initiate.
Before, these messages were sent into the netMessagesPush channel,
which was wrong. The xmpp send-pack and receive-pack code discarded
such messages.
This still doesn't make XMPP push 100% robust. In testing, I am seeing
it sometimes try to run two send-packs, or two receive-packs at once
to the same client (probably because the client sent two requests).
Also, I'm seeing rather a lot of cases where it stalls out until it
runs into the 120 second timeout and cancels a push.
And finally, there seems to be a bug in runPush. I have logs that
show it running its setup action, but never its cleanup action.
How is this possible given its use of E.bracket? Either some exception
is finding its way through, or the action somehow stalls forever.
When this happens, one of the 2 clients stops syncing.
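
The stall-forever possibility is easy to demonstrate: E.bracket only
runs its cleanup when the inner action returns or receives an
exception, so an action blocked forever logs the setup but never the
cleanup. A contrived example:

    import Control.Concurrent
    import qualified Control.Exception as E

    main :: IO ()
    main = do
        mv <- newEmptyMVar
        -- Another thread holds a reference to mv but never fills it,
        -- so the RTS cannot detect a deadlock and throws nothing.
        _ <- forkIO $ threadDelay maxBound >> putMVar mv ()
        E.bracket (putStrLn "setup")
                  (\_ -> putStrLn "cleanup") -- never reached
                  (\_ -> takeMVar mv)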
This fixes a bug with git annex add in direct mode. If some files already
existed in the tree pointing at the same key as a file that was just added,
and their content was not present, add neglected to copy the content to
those files.
I also changed the behavior of moveAnnex slightly: When content is moved
into the annex in direct mode, it does not overwrite any content already
present in direct mode files. That content may have been modified, after all.
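
A sketch of the shape of the add fix described above; the helper is
hypothetical, and the simple existence check stands in for git-annex's
real content checking.

    import Control.Monad (forM_, unless)
    import System.Directory (copyFile, doesFileExist)

    -- After a key's content arrives, populate any other direct mode
    -- files associated with the same key that lack their content.
    populateAssociatedFiles :: FilePath -> [FilePath] -> IO ()
    populateAssociatedFiles content associated =
        forM_ associated $ \f -> do
            present <- doesFileExist f
            unless present $ copyFile content f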
Unless the request is for a repo uuid we already know. This way, if A1 pairs
with friend B1, and B1 pairs with device B2, then B1 can request A1 pair
with it and no confirmation is needed. (In future, may want to try to do
that automatically, to make a more robust network.)
the local tree.
Observed that the pushed refs were received, but not merged into master.
The merger never saw an add event for these refs. Either git is not writing
to a new file and renaming it into place, or the inotify code didn't notice
that. Changed it to also watch for modify events and that seems to have
fixed it!
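
With the hinotify package (the older API taking FilePath), the fix
amounts to adding Modify to the watched event varieties; a sketch, with
an illustrative directory and handler:

    import System.INotify

    -- Catch refs renamed into place *and* refs written in place.
    watchRefs :: FilePath -> (Event -> IO ()) -> IO ()
    watchRefs dir handler = do
        i <- initINotify
        _ <- addWatch i [Create, MoveIn, Modify] dir handler
        return ()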
* Add public repository group.
* webapp: Can now set up Internet Archive repositories.
TODO: Enabling IA repositories.
Without this, a very large batch add has commits of sizes approx
5000, 2500, 1250, etc down to 10, and then starts over at 5000.
This fixes it so it's 5000+ every time.
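
One batching strategy that yields full-size commits, as a sketch and
not necessarily the committer's exact logic: block until a whole batch
of changes has accumulated, rather than committing whatever happens to
have arrived. A real implementation would also flush a partial batch
after a quiet period.

    import Control.Concurrent.STM

    -- Blocks until n changes are available, then takes them all, so
    -- consecutive batches stay near n instead of halving each time.
    readFullBatch :: TChan a -> Int -> IO [a]
    readFullBatch chan n = atomically (go n)
      where
        go 0 = return []
        go k = do
            x <- readTChan chan
            (x :) <$> go (k - 1)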
A recent change made existing symlinks be re-staged. That does not need to
be done during the startup scan though.
That hook updates associated file bookkeeping info for direct mode.
But, everything already called addAssociatedFile when adding/changing a
file. It only needed to also call removeAssociatedFile when deleting a file,
or a directory.
This should make bulk adds faster, by some possibly significant amount.
Bulk removals may be a little slower, since catKeyFile now has to be run
on each removed file, but they will still be faster than adds.
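
A sketch of that delete-side bookkeeping, with stubs standing in for
the real catKeyFile and removeAssociatedFile; their actual signatures
differ.

    type Annex = IO    -- stand-in for git-annex's monad
    type Key = String  -- stand-in for the real Key type

    catKeyFile :: FilePath -> Annex (Maybe Key)
    catKeyFile _ = return Nothing  -- stub

    removeAssociatedFile :: Key -> FilePath -> Annex ()
    removeAssociatedFile _ _ = return ()  -- stub

    -- The deleted file is gone from the tree, so its key must be
    -- looked up via git's index: the extra cost noted above.
    onDel :: FilePath -> Annex ()
    onDel file = maybe (return ()) (\k -> removeAssociatedFile k file)
                     =<< catKeyFile file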
There's a tradeoff between making less frequent commits, and
needing to use memory to store all the changes that are coming
in. At 10 thousand, it needs 150 mb of memory. 5 thousand drops
that down to 90 mb or so.
This also turns out to have a significant impact on total run time.
I benchmarked 10k changes taking 27 minutes. But two 5k batches
took only 21 minutes.
If an add failed, we should lose the KeySource, since it, presumably,
differs due to a change that was made to the file.
(The locked down file is already deleted.)
Turns out that a lot of the time spent in a bulk add was just updating the
add alert to rotate through each file that was added. Showing one alert
makes for a significant speedup.
Also, when the webapp is open, this makes it take quite a lot less cpu
during bulk adds.
Also, it lets the user know when a bulk add happened, which is sorta
nice..
See analysis in bug report for one way this could happen.
This is so git remotes on servers without git-annex installed can be used
to keep clients' git repos in sync.
This is a behavior change, but since annex-sync can be set to disable
syncing with a remote, I think it's acceptable.
--allow-empty -m ""` run an editor.
See http://git-annex.branchable.com/bugs/assistant_hangs_during_commit/
Incidentally, this should work around the last problem that prevented the webapp
building on Android. (Except for a few places I need to clean up after
hand-fixing the spliced TH code.)
* addurl: Register transfer so the webapp can see it.
* addurl: Automatically retry downloads that fail, as long as some
additional content was downloaded.
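
The retry policy in the second item, roughly: keep retrying while each
attempt grows the destination file, and give up as soon as an attempt
makes no progress. A sketch with a hypothetical download action:

    import System.Directory (doesFileExist)
    import System.Posix.Files (fileSize, getFileStatus)

    downloadWithRetry :: (FilePath -> IO Bool) -> FilePath -> IO Bool
    downloadWithRetry download dest = go 0
      where
        go oldsize = do
            ok <- download dest
            if ok
                then return True
                else do
                    exists <- doesFileExist dest
                    if not exists
                        then return False
                        else do
                            sz <- fileSize <$> getFileStatus dest
                            if sz > oldsize
                                then go sz        -- made progress; retry
                                else return False -- stalled; give up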
symlinks on FAT and other filesystems not supporting symlinks.
also, blog for the day..
connecting to it from another.
Does not yet use HTTPS. I'd need to generate a certificate, and I'm not
sure what's the best way to do that.
Fixed by storing a list of cached inodes for a key, instead of just one.
Backwards compatibility note: An old git-annex version will fail to parse
an inode cache file that has been written by a new version and has
multiple items. It will succeed if there is just one. So old git-annexes
will have even worse behavior when there are duplicated files, if that is possible.
I don't think it will be a problem. (Famous last words.)
Also, note that it doesn't expire old and unused inode caches for a key.
It would be possible to add this if needed; just look through the
associated files for a key and if there are more cached inodes, throw out
any not corresponding to associated files. Unless a file is being copied
repeatedly and the old copy deleted, this lack of expiry should not be a
problem.
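
To illustrate the compatibility note, with a stand-in cache type and
line format rather than the real one: one cache per line parses fine
for new code, while the old parser insisted on exactly one item.

    import Data.Maybe (mapMaybe)
    import Data.Word (Word64)

    -- Illustrative stand-in: inode, size, mtime on one line.
    data InodeCache = InodeCache Word64 Word64 Word64

    parseLine :: String -> Maybe InodeCache
    parseLine l = case mapM readMaybe' (words l) of
        Just [i, s, m] -> Just (InodeCache i s m)
        _ -> Nothing
      where
        readMaybe' w = case reads w of
            [(x, "")] -> Just x
            _ -> Nothing

    -- Old: succeeds only when the file holds exactly one item.
    parseOld :: String -> Maybe InodeCache
    parseOld s = case lines s of
        [l] -> parseLine l
        _ -> Nothing

    -- New: accepts any number of items.
    parseNew :: String -> [InodeCache]
    parseNew = mapMaybe parseLine . lines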
same as is already done for bare repositories.
* since this is a crippled filesystem anyway, git-annex doesn't use
symlinks on it
* so there's no reason to use the mixed case hash directories that we're
stuck using to avoid breaking everyone's symlinks to the content
* so we can do what is already done for all bare repos, and make non-bare
repos on crippled filesystems use the all-lower case hash directories
* which are, happily, all 3 letters long, so they cannot conflict with
mixed case hash directories
* so I was able to 100% fix this and even resuming `git annex add` in the
test case will recover and it will all just work.
deleting it
Needs fixes to build when the webapp is disabled.
This just prettifies some display.
This avoids commit churn by the assistant when eg,
replacing a file with a symlink.
But, just as importantly, it prevents the working tree being left with a
deleted file if git-annex, or perhaps the whole system, crashes at the
wrong time.
(It also probably avoids confusing displays in file managers.)
Rather than re-adding a direct mode file unnecessarily when it's not
changed, just re-stage the symlink.
My test case for this bug is to have the assistant running and syncing to
a remote, and create a file in the annex. Then at the command line run
git annex drop. The assistant sees that the file is gone, sees it's a wanted
file, and downloads it from the remote.
With a directory special remote and a small file, I was seeing, around
1 time in 3, a race where the file got unstaged from git after it got
downloaded.
Looking at what direct mode content managing code does in this case, it
deletes the symlink, and then adds the file content back. It would be
possible, sometimes, to avoid removing the symlink and do this atomically.
And I probably should.. but in some cases, particularly where the file
needs to be run through `cp` (multiple direct mode files with same
content), there's no way to atomically replace the symlink with the
content.
Anyway, the bug turns out to be something that the watcher does right for
indirect mode, but not for direct mode. When it got an add event, it
checked to see if this was a new file, or one we've already added. In the
latter case, no add event was queued. But that means that only the rm event
is queued, and so it unstages the file.
Fixed by queueing an add event even when the file is already in git.
Tested by running hundreds of drops in a loop; file remained staged.
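
The atomic replacement mentioned above, where it is possible, is the
usual write-then-rename dance; a sketch (the temp name is illustrative,
a real one would be made unique):

    import System.Directory (copyFile)
    import System.Posix.Files (rename)

    -- rename(2) atomically replaces the symlink with the new file,
    -- so there is no window where the working tree file is missing.
    replaceSymlinkAtomically :: FilePath -> FilePath -> IO ()
    replaceSymlinkAtomically content file = do
        let tmp = file ++ ".tmp"  -- illustrative only
        copyFile content tmp
        rename tmp file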