assistant: Work around horrible, terrible, very bad behavior of
gnome-keyring, by not storing special-purpose ssh keys in ~/.ssh/*.pub.
Apparently gnome-keyring will load and indiscriminately use such keys in
some cases, even if they are not using any of the standard ssh key names.
Instead, store the keys in ~/.ssh/annex/, which gnome-keyring will not
check.

Note that neither I nor #debian-devel were able to quite reproduce this
problem, but I believe it exists, and that this fixes it. And it certainly
won't hurt anything.
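For illustration, a key stored outside the standard paths still works when
ssh is pointed at it explicitly; a stanza along these lines (host alias and
filename hypothetical) is all that's needed:

    Host git-annex-example.com
        HostName example.com
        IdentityFile ~/.ssh/annex/key.git-annex-example.com
        IdentitiesOnly yes

IdentitiesOnly also keeps ssh from offering whatever other keys the agent
(or gnome-keyring) may have loaded.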
|
Most remotes already have meters in their implementations of
retrieveKeyFile. Simply hooking these up to the transfer log makes that
information available. Easy peasy.

This is particularly valuable information for encrypted remotes, which
otherwise bypass the assistant's polling of temp files, and so don't have
good progress bars yet.

Still some work to do here (see progressbars.mdwn changes), but this is
entirely an improvement over the previous lack of progress bars for
encrypted downloads.
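A minimal sketch of the idea (names here are illustrative, not git-annex's
actual API): the meter callback the remote already calls is composed with
one that also records the update in the transfer log.

    newtype BytesProcessed = BytesProcessed Integer

    type MeterUpdate = BytesProcessed -> IO ()

    -- Compose the remote's existing meter with a transfer log writer, so
    -- every progress update the remote makes is also logged.
    hookMeter :: (BytesProcessed -> IO ()) -> MeterUpdate -> MeterUpdate
    hookMeter logupdate meter b = logupdate b >> meter b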
|
* addurl: Register transfer so the webapp can see it.
* addurl: Automatically retry downloads that fail, as long as some
additional content was downloaded.
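A sketch of the retry policy in the second item (hypothetical helper, not
git-annex's code): a failed download is retried only while each attempt
leaves the partial file larger than it was before, i.e. while progress is
being made.

    import System.Directory (doesFileExist, getFileSize)

    downloadWithRetry :: FilePath -> IO Bool -> IO Bool
    downloadWithRetry f download = go =<< size
      where
        go oldsize = do
            ok <- download
            if ok
                then return True
                else do
                    -- retry only if this attempt grew the partial file
                    newsize <- size
                    if newsize > oldsize then go newsize else return False
        size = do
            exists <- doesFileExist f
            if exists then getFileSize f else return 0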
|
symlinks on FAT and other filesystems not supporting symlinks.
Also, blog for the day.
|
For backwards compatibility, "" is treated as sequence number "0".
--debug will show xmpp sequence numbers now, but they are not otherwise
used.
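The backwards compatibility case is tiny; a sketch of the parse
(illustrative, not the actual code):

    import Text.Read (readMaybe)
    import Data.Maybe (fromMaybe)

    -- "" (and anything else unparseable) is treated as sequence number 0
    parseSequence :: String -> Integer
    parseSequence = fromMaybe 0 . readMaybe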
|
VERSION_FROM_CHANGELOG.
|
connecting to it from another.
Does not yet use HTTPS. I'd need to generate a certificate, and I'm not
sure what's the best way to do that.
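For what it's worth, one common way to generate a self-signed certificate
for this kind of use (a sketch; the webapp would still need to be taught to
load it):

    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
        -keyout privkey.pem -out certificate.pem -subj "/CN=localhost"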
|
Adjust preferred content expressions so that content in archive directories
is preferred until it has reached an archive or smallarchive repository.
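Roughly, the adjusted expression for a client repository reads along these
lines (a sketch of the syntax, not necessarily the exact shipped
expression):

    (exclude=*/archive/* and exclude=archive/*)
    or (not (copies=archive:1 or copies=smallarchive:1))

That is: content outside archive directories is wanted as usual, and
content inside them remains wanted only until one copy has reached a
repository in the archive or smallarchive group.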
|
Fixed by storing a list of cached inodes for a key, instead of just one.

Backwards compatibility note: An old git-annex version will fail to parse
an inode cache file that has been written by a new version and has
multiple items. It will succeed if there is just one. So old git-annexes
will have even worse behavior when there are duplicated files, if that is
possible. I don't think it will be a problem. (Famous last words.)

Also, note that it doesn't expire old and unused inode caches for a key.
It would be possible to add this if needed; just look through the
associated files for a key, and if there are more cached inodes, throw out
any not corresponding to associated files. Unless a file is being copied
repeatedly and the old copy deleted, this lack of expiry should not be a
problem.
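A sketch of the one-per-line reading that makes the list format work
(field layout here is assumed for illustration; git-annex's real on-disk
format may differ):

    import Text.Read (readMaybe)
    import Data.Maybe (mapMaybe)

    -- an inode cache identifies a file's content by inode, size, and mtime
    data InodeCache = InodeCache Integer Integer Integer
        deriving (Eq, Show)

    parseInodeCache :: String -> Maybe InodeCache
    parseInodeCache s = case words s of
        [i, sz, t] -> InodeCache <$> readMaybe i <*> readMaybe sz <*> readMaybe t
        _          -> Nothing

    -- new format: one cache per line; a one-line file still parses, which
    -- is what keeps the single-item case backwards compatible
    parseInodeCaches :: String -> [InodeCache]
    parseInodeCaches = mapMaybe parseInodeCache . lines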
|
same as is already done for bare repositories.
* since this is a crippled filesystem anyway, git-annex doesn't use
symlinks on it
* so there's no reason to use the mixed case hash directories that we're
stuck using to avoid breaking everyone's symlinks to the content
* so we can do what is already done for all bare repos, and make non-bare
repos on crippled filesystems use the all-lower case hash directories
* which are, happily, all 3 letters long, so they cannot conflict with
mixed case hash directories
* so I was able to 100% fix this; even resuming `git annex add` in the
test case will recover, and it will all just work.
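A sketch of why the two layouts cannot collide (assuming the cryptonite
package for md5; git-annex's real mixed-case encoding differs in detail):
the all-lowercase layout always yields directory names of exactly 3 hex
characters, while the mixed-case layout yields 2-character names.

    import Crypto.Hash (hash, Digest, MD5)
    import qualified Data.ByteString.Char8 as B

    hexMd5 :: String -> String
    hexMd5 = show . (hash :: B.ByteString -> Digest MD5) . B.pack

    -- all-lowercase layout: 3 + 3 hex characters, e.g. "f1a/0b2/"
    hashDirLower :: String -> FilePath
    hashDirLower key = take 3 h ++ "/" ++ take 3 (drop 3 h) ++ "/"
      where h = hexMd5 key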
|
connection caching.
|
This avoids commit churn by the assistant when, e.g., replacing a file
with a symlink.

But, just as importantly, it prevents the working tree being left with a
deleted file if git-annex, or perhaps the whole system, crashes at the
wrong time.

(It also probably avoids confusing displays in file managers.)
|
My test case for this bug is to have the assistant running and syncing to
a remote, and to create a file in the annex. Then, at the command line,
run git annex drop. The assistant sees that the file is gone, sees it's a
wanted file, and downloads it from the remote.

With a directory special remote and a small file, I was seeing, around 1
time in 3, a race where the file got unstaged from git after it got
downloaded.

Looking at what the direct mode content managing code does in this case:
it deletes the symlink, and then adds the file content back. It would be
possible, sometimes, to avoid removing the symlink and do this atomically.
And I probably should. But in some cases, particularly where the file
needs to be run through `cp` (multiple direct mode files with the same
content), there's no way to atomically replace the symlink with the
content.
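Where it is possible, the atomic version would be the standard rename
trick (a sketch, not what the code currently does): write the new content
to a temporary file in the same directory, then rename it over the
symlink, so no observer ever sees the file missing.

    import System.Directory (renameFile)

    atomicReplace :: FilePath -> String -> IO ()
    atomicReplace file content = do
        let tmp = file ++ ".tmp"  -- same directory, so the rename is atomic
        writeFile tmp content
        renameFile tmp file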
Anyway, the bug turns out to be something that the watcher does right for
indirect mode, but not for direct mode. When it got an add event, it
checked to see if this was a new file, or one we'd already added. In the
latter case, no add event was queued. But that means that only the rm
event is queued, and so it unstages the file.

Fixed by queueing an add event even when the file is already in git.
Tested by running hundreds of drops in a loop; the file remained staged.
|
This is the counterpart to the bug I fixed yesterday in the assistant.
|
repositories want content that is only located on untrusted or dead repositories.
|
and the assistant add to the annex.
I would have sort of liked to put this in .gitattributes, but it seems
it does not support multi-word attribute values. Also, making this a single
config setting makes it easy to only parse the expression once.
A natural next step would be to make the assistant `git add` files that
are not annex.largefiles. OTOH, I don't think `git annex add` should
`git add` such files, because git-annex command line tools are
not in the business of wrapping git command line tools.
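For illustration, annex.largefiles holds an expression in the same style
as preferred content; something along these lines:

    git config annex.largefiles "largerthan=100kb and not (include=*.c or include=*.h)"

Files matching the expression get annexed by `git annex add`; the rest
would be left for `git add`, per the division of labor described above.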
|
is a monstrosity that should have died 25 years ago, such as csh.
|
There was confusion in different parts of the progress bar code about
whether an update contained the total number of bytes transferred, or the
number of bytes transferred since the last update. One way this bug
showed up was progress bars that seemed to stick at zero for a long time.

In order to fix it comprehensively, I added a new BytesProcessed data
type, which is explicitly a total quantity of bytes, not a delta.

Note that this doesn't necessarily fix every problem with progress bars.
Particularly, buffering can now cause progress bars to seem to run ahead
of transfers, reaching 100% when data is still being uploaded.
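A sketch of the shape of the fix (close to the idea, if not the exact
code): meters only ever receive a running total, and any source that
natively reports deltas has to go through an explicit accumulating
adapter, so the two quantities can no longer be conflated.

    import Data.IORef

    -- always a total quantity of bytes transferred so far, never a delta
    newtype BytesProcessed = BytesProcessed Integer
        deriving (Eq, Ord, Show)

    type MeterUpdate = BytesProcessed -> IO ()

    -- adapter for delta-reporting code: accumulate deltas into a total
    deltaMeter :: MeterUpdate -> IO (Integer -> IO ())
    deltaMeter update = do
        ref <- newIORef 0
        return $ \delta -> do
            total <- atomicModifyIORef' ref (\t -> let t' = t + delta in (t', t'))
            update (BytesProcessed total)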
|
missed if they occurred while a page was loading.

When a page is loaded, the javascript requests a notification url, and
does long polling on the url to be informed of changes. But if a change
occurred before the notification url was requested, the page would not be
notified of that change, and so the page display would not update.

I fixed this by *always* updating the page display after it gets the
notification url. This is extra work, but the overhead is not noticeable
amid the other overhead of loading a page.

(A nicer way would be to somehow record the version of a page as initially
loaded, and then compare it with the current version when getting the
notification url, and only force an update if it has changed. But getting
the "version" of the different parts of the page that use long polling is
difficult.)
|
Move all the binaries and libraries under a bundle/ subdirectory, so that
when it's in PATH, only git-annex, runshell, and git-annex-webapp will be
available.
|
already in progress, rather than failing with no indication why.
|
changes, now fixed.
|
syncing with a remote fails.
|
repository when in indirect mode.
|
network, or connecting a drive.
|
A long time ago I made Remote an instance of the Ord typeclass, with an
implementation that compared the costs of Remotes. That seemed like a good
idea at the time, as it saved typing. But at the time I was still making
custom Read and Show instances too. I've since learned that this is *not*
a good idea, and neither is making custom Ord instances, without deep
thought about the possible sets of values in a type. Haskell typeclasses
are not a toy.

This Ord instance came around and bit me when I put Remotes into a Set:
remotes with the same cost compared as equal, so a remote could appear to
be in the Set even when it was not. Putting Remotes into a Map was
affected in the same way.

Rarely does a bug go this deep. I've fixed it comprehensively: first
removing the Ord instance entirely, and fixing the places that wanted to
order remotes by cost to do that explicitly; then adding back an Ord
instance that is much more sane. I also checked the rest of the Ord
instances in the code base (which were all ok).

While doing that, I found lots of places that kept remotes in Maps and
Sets. All of it was probably subtly broken in one way or another before
this fix, but it would be hard to say exactly how the bugs would manifest.
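A sketch of the failure and the fix (field names here are assumed for
illustration):

    import Data.Ord (comparing)
    import Data.List (sortBy)

    data Remote = Remote { uuid :: String, cost :: Int }

    -- The old, bad idea (sketch): remotes with equal cost compare as EQ,
    -- so Data.Set treats distinct remotes as the same element, and
    -- membership tests give wrong answers.
    --   instance Ord Remote where compare = comparing cost

    -- A sane instance orders by the unique identifier...
    instance Eq Remote where
        a == b = uuid a == uuid b
    instance Ord Remote where
        compare = comparing uuid

    -- ...and the places that want cost ordering do it explicitly.
    byCost :: [Remote] -> [Remote]
    byCost = sortBy (comparing cost)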