assistant: Work around horrible, terrible, very bad behavior of
gnome-keyring, by not storing special-purpose ssh keys in ~/.ssh/*.pub.
Apparently gnome-keyring will load and indiscriminately use such keys in
some cases, even if they are not using any of the standard ssh key
names. Instead, store the keys in ~/.ssh/annex/, which gnome-keyring
will not check.
Note that neither I nor #debian-devel were able to quite reproduce this
problem, but I believe it exists and that this fixes it. And it
certainly won't hurt anything.
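As an illustration (the key name here is made up, not what git-annex
generates), a special-purpose key created under ~/.ssh/annex/ stays out
of gnome-keyring's reach, unlike a key pair dropped into ~/.ssh/ with a
matching *.pub file:

    mkdir -p ~/.ssh/annex
    ssh-keygen -t rsa -N "" -f ~/.ssh/annex/key.git-annex-example.com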
This will make it easier to use the Evil Splicer when it needs to add
package-qualified imports.
And there's no real downside.
* addurl: Register transfer so the webapp can see it.
* addurl: Automatically retry downloads that fail, as long as some
additional content was downloaded.
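For example, a download started like this (placeholder URL) now shows up
in the webapp's transfer display, and is retried if it fails after
fetching some additional content:

    git annex addurl http://example.com/big.iso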
symlinks on FAT and other filesystems not supporting symlinks.
Also, blog for the day.
For backwards compatibility, "" is treated as sequence number "0".
--debug will show xmpp sequence numbers now, but they are not otherwise
used.
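One way to see them (assuming an assistant set up for xmpp syncing) is
to run it in the foreground with debugging and filter the output:

    git annex assistant --foreground --debug 2>&1 | grep -i xmpp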
and remove any activity alerts
connecting to it from another.
Does not yet use HTTPS. I'd need to generate a certificate, and I'm not
sure of the best way to do that.
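One possible approach would be a self-signed certificate, for example
generated with openssl (a sketch only, not what the webapp ended up
doing):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -subj "/CN=localhost" \
        -keyout privkey.pem -out certificate.pem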
Unless highRandomQuality=false (or --fast) is set, use Libgcrypt's
'GCRY_VERY_STRONG_RANDOM' level by default for cipher generation, as is
done for OpenPGP key generation.
On the assistant side, the random quality is left at the old (lower)
level, in order not to scare the user with an endless page load due to
the blocking PRNG waiting for IO actions.
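The difference between the levels can be seen with gpg itself; quality
level 2 corresponds to GCRY_VERY_STRONG_RANDOM and may block for a long
time on a machine short on entropy (illustrative commands, not the
git-annex code path):

    gpg --gen-random 2 64 | xxd   # very strong; may block waiting for entropy
    gpg --gen-random 1 64 | xxd   # the faster level the assistant keeps using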
Fixed by storing a list of cached inodes for a key, instead of just one.
Backwards compatibility note: An old git-annex version will fail to
parse an inode cache file that has been written by a new version and has
multiple items. It will succeed if there is just one. So old git-annexes
will have even worse behavior when there are duplicated files, if that
is possible. I don't think it will be a problem. (Famous last words.)
Also, note that it doesn't expire old and unused inode caches for a key.
It would be possible to add this if needed; just look through the
associated files for a key, and if there are more cached inodes, throw
out any not corresponding to associated files. Unless a file is being
copied repeatedly and the old copy deleted, this lack of expiry should
not be a problem.
same as is already done for bare repositories.
* since this is a crippled filesystem anyway, git-annex doesn't use
  symlinks on it
* so there's no reason to use the mixed-case hash directories that we're
  stuck using to avoid breaking everyone's symlinks to the content
* so we can do what is already done for all bare repos, and make
  non-bare repos on crippled filesystems use the all-lowercase hash
  directories
* which are, happily, all 3 letters long, so they cannot conflict with
  mixed-case hash directories
* so I was able to 100% fix this, and even resuming `git annex add` in
  the test case will recover and it will all just work.
deleting it
Needs fixes to build when the webapp is disabled.
This just prettifies some display.
This avoids commit churn by the assistant when, e.g., replacing a file
with a symlink.
But, just as importantly, it prevents the working tree being left with a
deleted file if git-annex, or perhaps the whole system, crashes at the
wrong time.
(It also probably avoids confusing displays in file managers.)
Rather than re-adding a direct mode file unnecessarily when it's not
changed, just re-stage the symlink.
My test case for this bug is to have the assistant running and syncing
to a remote, and create a file in the annex. Then at the command line
run git annex drop. The assistant sees that the file is gone, sees it's
a wanted file, and downloads it from the remote.
With a directory special remote and a small file, I was seeing, around 1
time in 3, a race where the file got unstaged from git after it got
downloaded.
Looking at what the direct mode content managing code does in this case,
it deletes the symlink, and then adds the file content back. It would be
possible, sometimes, to avoid removing the symlink and do this
atomically. And I probably should, but in some cases, particularly where
the file needs to be run through `cp` (multiple direct mode files with
the same content), there's no way to atomically replace the symlink with
the content.
Anyway, the bug turns out to be something that the watcher does right
for indirect mode, but not for direct mode. When it got an add event, it
checked to see if this was a new file, or one we've already added. In
the latter case, no add event was queued. But that means that only the
rm event is queued, and so it unstages the file.
Fixed by queueing an add event even when the file is already in git.
Tested by running hundreds of drops in a loop; the file remained staged.
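Roughly, that test looks like this (file name, loop count, and timing
are invented; it assumes the assistant is syncing to a remote that holds
the content):

    git annex assistant            # running and syncing to a remote
    echo content > file
    git annex add file
    git annex sync
    for i in $(seq 100); do
        git annex drop file        # assistant re-downloads the wanted file
        sleep 2
    done
    git status                     # the file should still be staged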
An off-by-one error prevented the assistant from ever dropping files
from untrusted repositories.
and the assistant add to the annex.
I would have sort of liked to put this in .gitattributes, but it seems
it does not support multi-word attribute values. Also, making this a
single config setting makes it easy to only parse the expression once.
A natural next step would be to make the assistant `git add` files that
do not match annex.largefiles. OTOH, I don't think `git annex add`
should `git add` such files, because the git-annex command line tools
are not in the business of wrapping the git command line tools.
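As an example (the threshold and globs are only an illustration), a
setting like the following makes `git annex add` and the assistant annex
only larger, non-source files:

    git config annex.largefiles "largerthan=100kb and not (include=*.c or include=*.h)"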
When another process is running a transfer, if we try to run a
duplicate it'll fail, and we should not remove the transfer from the
transfer display.
missed if they occurred while a page was loading.
When a page is loaded, the javascript requests a notification url, and
does long polling on the url to be informed of changes. But if a change
occurred before the notification url was requested, it would not be
notified of that change, and so the page display would not update.
I fixed this by *always* updating the page display after it gets the
notification url. This is extra work, but the overhead is not noticeable
amid the other overhead of loading a page.
(A nicer way would be to somehow record the version of a page initially
loaded, and then compare it with the current version when getting the
notification url, and only force an update if it's changed. But getting
the "version" of the different parts of the page that use long polling
is difficult.)
attached
Needed to send a trailing NUL to end a request, and set the read handle
non-blocking.
Also, set fileSystemEncoding on all handles, since there's a filename in
there.
This way it's only visible when transfers are not running, which is about
what a user would expect.
Like the old one, but does not mention which remotes are scanned.
I think this is less confusing, as it does not imply the remotes were
somehow accessed (which they are not; inaccessible remotes can be
scanned).
If transferkey crashes or even fails to run, the TransferWatcher will
not see the transfer info file get created, and so will not remove the
transfer from the list of active transfers. This causes the list to grow
continually, and all active transfers are displayed in the webapp. So,
put in a guard.
I assume that transferkey will not exit 0 while neglecting to clean up.
Rather than forking a git-annex transferkey only to have it fail,
just immediately record the failed transfer (so when the drive is plugged
in, the scan will retry it).