(Except for the actual streaming of receive-pack through XMPP, which
can only run once we've gotten an appropriate uuid in a push initiation
message.)
Pushes are now only initiated when the initiation message comes from a
known uuid. This allows multiple distinct repositories to use the same xmpp
address.
Note: This probably breaks initial push after xmpp pairing, because at that
point we may not know about the paired uuid, and so reject the push from
it. It won't break in simple cases, because the annex-uuid of the remote
is checked. However, when there are multiple clients behind a single xmpp
address, only the uuid of the first is recorded in annex-uuid, and so any
pushes from the others will be rejected (unless the first remote pushes their
uuids to us beforehand).
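
A minimal sketch of that gating logic, with made-up names (knownUUIDs and
startPush are stand-ins for illustration, not git-annex's actual functions):

    import qualified Data.Set as S

    type UUID = String

    -- Only initiate a push when the initiation message names a uuid we
    -- already know; messages from other repositories sharing the same
    -- xmpp address are ignored.
    handlePushInitiation :: S.Set UUID -> UUID -> IO () -> IO ()
    handlePushInitiation knownUUIDs u startPush
        | u `S.member` knownUUIDs = startPush
        | otherwise               = return ()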
This is so git remotes on servers without git-annex installed can be used
to keep clients' git repos in sync.
This is a behavior change, but since annex-sync can be set to disable
syncing with a remote, I think it's acceptable.
attached
syncing with a remote fails.
This fixes the issue mentioned in the last commit.
Turns out just collecting the UUIDs of clients behind an XMPP remote is
insufficient (although I should probably still do it for other reasons),
because a single remote repo might be connected via both XMPP and local
pairing. So a way is needed to know when a push was received from any
client using a given XMPP remote over XMPP, as opposed to via ssh.
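
One plausible shape for that, sketched here with invented types (not the
actual git-annex data types): record the transport a push arrived over, so
later code can tell XMPP pushes apart from ssh pushes.

    type RemoteName = String

    -- Hypothetical type: how a push reached us.
    data PushSource
        = ViaXMPP RemoteName  -- some client behind this XMPP remote
        | ViaSsh RemoteName
        deriving (Show, Eq)

    -- A handler can then react only to XMPP pushes.
    onPush :: PushSource -> IO ()
    onPush (ViaXMPP r) = putStrLn ("xmpp push via " ++ r)
    onPush (ViaSsh _)  = return ()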
and on startup.
Make manualPull send push requests over XMPP.
When reconnecting with remotes, those that are XMPP remotes cannot
immediately be pulled from and scanned, so instead maintain a set of
(probably) desynced remotes, and put XMPP remotes on it. (This set could be
used in other ways later, if we can detect we're out of sync with other
types of remotes.)
The merger handles detecting when an XMPP push is received from a desynced
remote, and triggers a scan then, if they have in fact diverged.
This has one known bug: A single XMPP remote can have multiple clients
behind it. When this happens, only the UUID of one client is recorded
as the UUID of the XMPP remote. Pushes from the other XMPP clients will not
trigger a scan. If the client whose UUID is expected responds to the push
request, it'll work, but when that client is offline, we're SOL.
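
A rough sketch of the desynced-remote bookkeeping described above (all
names assumed; git-annex's real code differs):

    import Control.Concurrent.MVar
    import Control.Monad (when)
    import qualified Data.Set as S

    type UUID = String

    -- Remotes we could not pull from on reconnect (e.g. XMPP remotes).
    markDesynced :: MVar (S.Set UUID) -> UUID -> IO ()
    markDesynced v u = modifyMVar_ v (return . S.insert u)

    -- Called by the merger when a push arrives: if the pushing remote
    -- was desynced and has in fact diverged, trigger a full scan.
    pushReceived :: MVar (S.Set UUID) -> UUID -> IO Bool -> IO () -> IO ()
    pushReceived v u diverged scan = do
        wasDesynced <- modifyMVar v $ \s ->
            return (S.delete u s, u `S.member` s)
        d <- diverged
        when (wasDesynced && d) scan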
Pass subcommand as a regular param, which allows passing git parameters
like -c before it. This was already done in the piping set of functions,
but not the command running set.
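
A sketch of the idea (function names are made up; git-annex's own
command-running code differs): the subcommand becomes just another element
of the argument list, so global git options can precede it.

    import System.Process (readProcess)

    -- Global git options (like -c) go before the subcommand; the
    -- subcommand is just another parameter.
    runGit :: [String] -> String -> [String] -> IO String
    runGit globalopts subcommand params =
        readProcess "git" (globalopts ++ [subcommand] ++ params) ""

    example :: IO String
    example = runGit ["-c", "core.bare=false"] "ls-files" ["--modified"]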
This got broken when I optimised reconnecting with remotes to skip the
full scan when the remote had not diverged.
I decided to use the fallback push mode from the beginning for XMPP, since
while it uses some ugly branches, it avoids the possibility of a normal
push failing, and needing to pull and re-push. Given the overhead of XMPP,
and the difficulty of building such a chain of actions with the async
implementation, this seemed reasonable.
It seems to work great!
This improves type safety.
Among other things, this makes it immediately sync files from a removable
drive when it's added.
Several things only happen when on a branch, so make sure we're on one.
Three old versions of functions remain that will need more reworking to
remove: getDaemonStatusOld, modifyDaemonStatusOld_, and
modifyDaemonStatusOld.
lots of nice cleanups
Converted several threads to run in the monad.
Added a lot of useful combinators for working with the monad.
Now the monad includes the name of the thread.
Some debugging messages are disabled pending converting other threads.
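
The commit doesn't show the monad's definition; as a minimal sketch of the
general shape (names and fields assumed, not the actual code), it could be
a reader over per-thread state that carries the thread's name:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    import Control.Monad.Reader

    data AssistantData = AssistantData
        { threadName :: String
        -- ... daemon status, queues, etc. would live here
        }

    newtype Assistant a = Assistant (ReaderT AssistantData IO a)
        deriving (Functor, Applicative, Monad, MonadIO,
                  MonadReader AssistantData)

    -- Debug messages can now automatically name the current thread.
    debug :: String -> Assistant ()
    debug msg = do
        name <- asks threadName
        liftIO $ putStrLn (name ++ ": " ++ msg)

    runAssistant :: String -> Assistant a -> IO a
    runAssistant name (Assistant a) = runReaderT a (AssistantData name)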
Seems nothing else ensures the branch is committed anymore.
Still need to do something about transfer queueing, however. This could be
a real can of worms.
Lacks error handling, reconnection, and credentials configuration,
and doesn't actually do anything when it receives an incoming notification.
Other than that, it might work! :)
Hooked up everything that needs to notify on pushes. Note that
syncNewRemote does not notify. This is probably ok, and I'd need to thread
more state through to make it do so.
This is only set up to support a single push notification method; I didn't
use a NotificationBroadcaster. Partly because I don't yet know what info
about pushes needs to be communicated, so my data types are only
preliminary.
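
For illustration only — the commit says the data types are preliminary, so
this is just one plausible minimal shape, not the real definition:

    type UUID = String

    -- Unlike a NotificationBroadcaster, which fans out to any number
    -- of listeners, this supports exactly one notification method.
    newtype PushNotifier = PushNotifier (UUID -> IO ())

    notifyPush :: PushNotifier -> [UUID] -> IO ()
    notifyPush (PushNotifier send) = mapM_ send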
Can't use show-ref --tags --branches, as that omits remote branches.
Instead, filter out the synced refs directly.
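
A sketch of that filtering (helper names assumed): list every ref with
plain show-ref and drop the synced ones, instead of relying on show-ref's
own ref selection options.

    import Data.List (isPrefixOf)
    import System.Process (readProcess)

    -- Each show-ref output line is "<sha> <refname>".
    allRefs :: IO [String]
    allRefs = map (drop 1 . dropWhile (/= ' ')) . lines
        <$> readProcess "git" ["show-ref"] ""

    nonSyncedRefs :: IO [String]
    nonSyncedRefs = filter (not . ("refs/synced/" `isPrefixOf`)) <$> allRefs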
Don't expose these as branches in refs/heads/. Instead hide them away in
refs/synced/ where only show-ref will find them.
Make unused only look at branches and tags, not these other things,
so it won't care if some stale sync ref used to use a file.
This means they don't need to be deleted; deleting them could have
led to an incoming sync being missed.
The fallback branches pushed to contain the uuid of the pusher, which is
ugly. That's why syncing doesn't normally use this method.
The merger deletes fallback branches after merging them, to contain the
ugliness, and so unused doesn't look at data from these branches.
(The fallback git-annex branch is left behind for now.)
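
To illustrate the ugliness being contained (the exact naming scheme is
assumed here, not quoted from the source):

    -- A fallback push qualifies each pushed branch with the pusher's
    -- uuid, e.g. "refs/synced/<uuid>/master"; the merger merges such
    -- a ref and then deletes it.
    fallbackRef :: String -> String -> String
    fallbackRef uuid branch = "refs/synced/" ++ uuid ++ "/" ++ branch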
Avoid trying to git push/pull to special remotes, but still do transfer
scans of them, after git pull from any other remotes, so we know about
any values that have been placed on them.
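
A sketch of that ordering (names assumed): only ordinary git remotes take
part in the git sync, and the special remotes are scanned afterwards.

    import Data.List (partition)

    data Remote = Remote
        { remoteName :: String
        , isSpecial  :: Bool
        }

    -- git pull/push only to ordinary git remotes; afterwards, run a
    -- transfer scan of the special remotes to learn what they hold.
    syncAndScan :: (Remote -> IO ()) -> (Remote -> IO ()) -> [Remote] -> IO ()
    syncAndScan gitSync scan rs = do
        mapM_ gitSync gitRemotes
        mapM_ scan specialRemotes
      where
        (specialRemotes, gitRemotes) = partition isSpecial rs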
The expensive transfer scan now scans a whole set of remotes in one pass.
So at startup, or when network comes up, it will run only once.
Note that this can result in transfers from/to higher cost remotes being
queued before transfers of other content from/to lower cost remotes.
Before, low cost remotes were scanned first and all their transfers came
first. When multiple transfers are queued for a key, the lower cost ones
are still queued first. However, this could result in transfers from slow
remotes running for a long time while transfers of other data from faster
remotes wait.
I expect to make the transfer queue smarter about ordering
and/or make it allow multiple transfers at a time, which should eliminate
this annoyance. (Also, it was already possible to get into that situation,
for example if the network was up, lots of transfers from slow remotes
might be queued, and then a disk is mounted and its faster transfers have
to wait.)
Also note that this means I don't need to improve the code in
Assistant.Sync that currently checks if any of the reconnected remotes
have diverged, and if so, queues scans of all of them. That had been very
inefficient, but now doesn't matter.
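
A toy model of the per-key ordering described above (entirely
illustrative, not the real queue): where several remotes can serve the
same key, the lower cost ones sort first, while keys themselves turn up
in whatever order the single scan pass finds them.

    import Data.List (sortOn)
    import qualified Data.Map.Strict as M

    data Remote = Remote { remoteName :: String, cost :: Int }

    -- For each key, order the candidate remotes by cost.
    transfersByKey :: Ord key => (Remote -> [key]) -> [Remote]
                   -> M.Map key [Remote]
    transfersByKey findKeys rs = M.map (sortOn cost) $
        M.fromListWith (++) [ (k, [r]) | r <- rs, k <- findKeys r ]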
Or when a remote first becomes available after startup.
of a remote