Known problems:

1. Tries to run tahoe start when the daemon is already running.
2. If multiple tahoe remotes are set up on the same computer,
   they will have the same node.url configured by default,
   and this confuses tahoe commands.

This commit was sponsored by LeastAuthority.com
connection change

So that remotes that use a persistent network connection are restarted.
A remote might keep open a long-duration network connection, and could
fail to deal well with losing the connection. This is particularly a
concern now that we have external special remotes. An external special
remote that is implemented naively might open the connection only when
PREPARE is sent, and if it loses the connection, throw errors on each
request that is made.

(Note that the ssh connection caching should not have this problem; if the
long-duration ssh process loses its connection, the named pipe is
disconnected and the next ssh attempt will reconnect. Also, XMPP already
deals with disconnection robustly in its own way.)

There's no way for git-annex to know if a lost network connection actually
affects a given remote, which might have a transfer in progress. It does not
make sense to force kill the transferkeys process every time the NetWatcher
detects a change. (Especially because the NetWatcher sometimes polls only
once per hour.)

In any case, the NetWatcher only detects connection to a network, not
disconnection. So if a transfer is in progress over the network, and the
network goes down, that will need to time out on its own.

An alternate approach that was considered is to use a separate transferkeys
process for each remote, detect when a request fails, and assume that means
the process is in a failing state and restart it. The problem with that
approach is that if a resource is not available and a remote fails every
time, it degrades to starting a new transferkeys process for every file
transfer, which is too expensive.

Instead, this commit only handles the network reconnection case, and restarts
transferkeys only once the network has reconnected and another transfer needs
to be made. So, a transferkeys process will be reused for 1 hour, or until the
next network connection.

----

The NotificationBroadcaster was rewritten to use TMVars rather than
MSampleVars, to allow checking without blocking whether a notification has
been received.
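For reference, a minimal sketch of the TMVar pattern (illustrative names
only, not the actual NotificationBroadcaster code):

    import Control.Concurrent.STM
    import Control.Monad (void)
    import Data.Maybe (isJust)

    -- One TMVar per listener; a full TMVar means a notification is
    -- pending for that listener.
    newtype NotificationHandle = NotificationHandle (TMVar ())

    newNotificationHandle :: IO NotificationHandle
    newNotificationHandle = NotificationHandle <$> newEmptyTMVarIO

    -- Post a notification; harmless if one is already pending.
    sendNotification :: NotificationHandle -> IO ()
    sendNotification (NotificationHandle v) =
        atomically $ void $ tryPutTMVar v ()

    -- Block until a notification arrives, consuming it.
    waitNotification :: NotificationHandle -> IO ()
    waitNotification (NotificationHandle v) = atomically $ takeTMVar v

    -- The non-blocking check that MSampleVar could not provide.
    checkNotification :: NotificationHandle -> IO Bool
    checkNotification (NotificationHandle v) =
        atomically $ isJust <$> tryTakeTMVar v

tryPutTMVar makes sending idempotent, and tryTakeTMVar allows polling for
a pending notification without blocking.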
----

This commit was sponsored by Tobias Brunner.
large number of files.
Noticed that it was possible for add to move a file to .git/annex/objects
and not make the link if the disk was full. This happened because the
location log update failed, and so addLink never got a chance to run.
Running addLink first fixes it; on error it will unwind by moving the file
back to where it was originally.
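The unwind looks roughly like this (a sketch with stand-in names, not
git-annex's real ingest code):

    import Control.Exception (onException)
    import System.Directory (renameFile)

    -- Move the file into the object store, then run the action that
    -- makes the symlink and updates the location log. If that action
    -- fails, e.g. because the disk is full, move the file back to
    -- where it came from.
    ingest :: FilePath -> FilePath -> IO () -> IO ()
    ingest src object record = do
        renameFile src object
        record `onException` renameFile object src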
can be checked.

This adds an HTTP HEAD request before the download is done. That was
already the case when the assistant was running, and it seems worth it to
avoid filling up the whole disk, like happened to my server today.
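A sketch of such a check using http-client (not necessarily the library
used here; urlSize is a made-up name):

    import qualified Data.ByteString.Char8 as B
    import Network.HTTP.Client
    import Network.HTTP.Types (hContentLength, methodHead)
    import Text.Read (readMaybe)

    -- Issue a HEAD request and read the Content-Length header, so the
    -- caller can skip the download when the disk lacks the space.
    urlSize :: Manager -> String -> IO (Maybe Integer)
    urlSize mgr url = do
        req <- parseRequest url
        resp <- httpNoBody req { method = methodHead } mgr
        return $ readMaybe . B.unpack
            =<< lookup hContentLength (responseHeaders resp)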
world writable when writing to those files, as ssh can refuse to use files
that allow another user to write to them.
This allows a remote to store a piece of arbitrary state associated with a
key. This is needed to support Tahoe, where the file-cap is calculated from
the data stored in it, and used to retrieve a key later. Glacier also would
be much improved by using this.

GETSTATE and SETSTATE are added to the external special remote protocol.

Note that the state is left as-is even when a key is removed from a remote.
It's up to the remote to decide when it wants to clear the state.
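Roughly, the exchange looks like this (the remote sends the request and
git-annex replies; the key and tahoe file-cap shown are made up):

    remote    -> git-annex:  SETSTATE SHA256E-s1048576--e3b0c442 URI:CHK:...
    ...later, when retrieving the key...
    remote    -> git-annex:  GETSTATE SHA256E-s1048576--e3b0c442
    git-annex -> remote:     VALUE URI:CHK:...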
The remote state log, $KEY.log.rmt, is a UUID-based log. However, rather
than using the old UUID-based log format, I created a new variant of that
format. The new variant is more space efficient (since it lacks the
"timestamp=" hack), easier to parse (and the parser doesn't mess with
whitespace in the value), and avoids the compatibility cruft in the old one.

This seemed worth cleaning up for these new files, since there could be a
lot of them, while before, UUID-based logs were only used for a few log
files at the top of the git-annex branch. The transition code has also
been updated to handle these new UUID-based logs.
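A minimal sketch of parsing a line of the new variant, assuming the format
is "timestamp uuid value" with the value running to the end of the line
(the old format instead tacked a "timestamp=" field onto the end):

    import Data.Char (isSpace)
    import Text.Read (readMaybe)

    data LogLine = LogLine
        { logTime  :: Double
        , logUUID  :: String
        , logValue :: String
        } deriving (Show)

    -- Everything after the second space is the value, with its
    -- whitespace left untouched.
    parseLogNew :: String -> Maybe LogLine
    parseLogNew l = case break isSpace l of
        (ts, ' ':rest) -> case break isSpace rest of
            (u, ' ':v) -> LogLine <$> readMaybe ts <*> pure u <*> pure v
            _          -> Nothing
        _ -> Nothing
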
This commit was sponsored by Daniel Hofer.
See http://git-annex.branchable.com/forum/How_to_resolve_a_interrupted_merge_in_direct_mode__63__/
keeps crashing when restarted.
This was unexpectedly difficult because of a dependency cycle. Parsing a
preferred content expression involves several things that need to operate
on the list of remotes, which needs Remote.External. The only way to avoid
this cycle (I tried breaking it at several points) was to skip parsing the
expression in SETWANTED.

That's sorta ok, because git-annex already has to deal with unparsable
preferred content expressions being stored, in order to handle eg,
upgrades. But I'm still not very happy that I cannot check it.

I feel this is a strong indication that I need to beware of further
bloating the special remote protocol interface.
this is used for password prompting.
subdirectories to go missing.
specifying a command to run. Fixes transferring files to/from android and probably a few other things.
problem when a direct mode repo was somehow set to v3 rather than v4, and so the automatic direct mode upgrade to v5 was not done.
standalone build, and OSX app, so they will be available when it's added to PATH.
stuck in incoming.)
it to the configuration for any ssh remotes previously created by the
assistant. This avoids repeated prompts by ssh if the host key changes;
instead, syncing with such a remote will fail. Closes: #732602
follow symlinks to other directories, and add their contents to the annex.
Batch detection is heuristic, so it can sometimes fail. I observed one such
failure while starting up in a repository with 87000 files. After the first
several batches of ~5000 files, it fell out of batch mode and never
re-entered it, and so made many more commits of a few files at a time than
necessary.

So, let's always use batch mode when in the startup scan. This avoids the
heuristic there, at least.

There is clearly also room to improve the heuristic. Possibly 10 files is
too high a bar to be found during a commit, on a system that can commit
quickly.
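The decision this describes amounts to something like (illustrative; the
threshold of 10 is the one discussed above):

    -- During the startup scan, always batch. Otherwise fall back to
    -- the heuristic: batch only once enough changes have piled up.
    data Phase = StartupScan | Steady

    useBatchMode :: Phase -> Int -> Bool
    useBatchMode StartupScan _       = True
    useBatchMode Steady      pending = pending >= 10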
use it, to remove dependence on the host glibc.
directory in PATH as an alternative installation method, and will use readlink to find where the build was unpacked.
find --format: Added hashdirlower, hashdirmixed, keyname, and mtime format
variables.
pulled from a remote. A bug prevented the files from being updated in the work tree, and this caused the modification to be reverted.
Fixes a test case I received where a corrupted repo was repaired, but the
git-annex branch was not. The root of the problem was that the set of
MissingObjects returned by the repair code was not necessarily a complete
set of all objects that might have been deleted during the repair.

So, stop trying to return that at all, and instead make the index file
checking code explicitly verify that each object the index uses is present.
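The explicit presence check can be as simple as asking git about each
object the index uses (a sketch only; in practice this would be batched
rather than running one git process per object):

    import Control.Monad (filterM)
    import System.Exit (ExitCode(..))
    import System.Process (readProcessWithExitCode)

    -- Return the subset of shas not present in the repository, using
    -- git cat-file -e, which exits nonzero for a missing object.
    missingObjects :: [String] -> IO [String]
    missingObjects = filterM missing
      where
        missing sha = do
            (code, _, _) <- readProcessWithExitCode
                "git" ["cat-file", "-e", sha] ""
            return (code /= ExitSuccess)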