| Commit message | Author | Age |
|
|
|
|
| |
I think I've been looking for that function for some time.
I.e., I remember wanting to collapse Just Nothing to Nothing.
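Presumably this is the behaviour of Control.Monad.join specialized to Maybe; a minimal sketch of the collapse:

    import Control.Monad (join)

    -- join collapses one layer of nesting; for Maybe this turns
    -- Just Nothing into Nothing, and Just (Just x) into Just x.
    collapsed :: Maybe Int
    collapsed = join (Just Nothing)    -- Nothing

    kept :: Maybe Int
    kept = join (Just (Just 42))       -- Just 42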
|
| |
|
|
|
|
|
|
| |
Introduced a new per-remote option 'annex-rsync-transport' to specify
the remote shell that is to be used with rsync. When the value is
'ssh', connections are cached unless 'sshcaching' is unset.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Most remotes have meters in their implementations of retrieveKeyFile
already. Simply hooking these up to the transfer log makes that information
available. Easy peasy.
This is particularly valuable information for encrypted remotes, which
otherwise bypass the assistant's polling of temp files, and so don't have
good progress bars yet.
Still some work to do here (see progressbars.mdwn changes), but this
is entirely an improvement over the previous lack of progress bars for
encrypted downloads.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Unless highRandomQuality=false (or --fast) is set, use Libgcrypt's
'GCRY_VERY_STRONG_RANDOM' level by default for cipher generation, as is
done for OpenPGP key generation.
On the assistant side, the random quality is left at the old (lower)
level, in order not to scare the user with an endless page load due to
the blocking PRNG waiting for IO actions.
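A rough sketch of generating such a cipher by shelling out to gpg, assuming gpg's --gen-random interface (level 2 corresponds to GCRY_VERY_STRONG_RANDOM, level 1 to the cheaper pool); the function name and byte count here are illustrative, not git-annex's actual code:

    import System.Process (readProcess)

    -- Generate an armored random cipher via gpg. A quality of 2 asks for
    -- GCRY_VERY_STRONG_RANDOM; 1 uses the cheaper pool (for --fast and
    -- the assistant). The 512-byte size is illustrative.
    genCipher :: Int -> IO String
    genCipher randomquality =
        readProcess "gpg"
            ["--gen-random", "--armor", show randomquality, "512"] ""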
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
There was confusion in different parts of the progress bar code about
whether an update contained the total number of bytes transferred, or the
number of bytes transferred since the last update. One way this bug
showed up was progress bars that seemed to stick at zero for a long time.
In order to fix it comprehensively, I add a new BytesProcessed data type,
that is explicitly a total quantity of bytes, not a delta.
Note that this doesn't necessarily fix every problem with progress bars.
Particularly, buffering can now cause progress bars to seem to run ahead
of transfers, reaching 100% when data is still being uploaded.
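A minimal sketch of the distinction, with illustrative helper names (the callback type and the adapter below are assumptions, not the exact code):

    import Data.IORef (newIORef, atomicModifyIORef')

    -- Total bytes processed so far; explicitly never a delta.
    newtype BytesProcessed = BytesProcessed Integer
        deriving (Eq, Ord, Show)

    -- A progress callback is always fed the running total.
    type MeterUpdate = BytesProcessed -> IO ()

    -- Adapt a source that reports per-chunk deltas: keep the running
    -- total in an IORef and feed only totals downstream.
    deltaToTotal :: MeterUpdate -> IO (Integer -> IO ())
    deltaToTotal update = do
        totalref <- newIORef 0
        return $ \delta -> do
            total <- atomicModifyIORef' totalref
                (\t -> (t + delta, t + delta))
            update (BytesProcessed total)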
|
|
|
|
|
| |
Added a function to insert a new cost into a list, which could be used to
adjust costs after a drag and drop.
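A sketch of what such an insertion could look like; insertCostAt and its halfway-between heuristic are hypothetical, for illustration only:

    -- Insert a new cost at position i in an ascending list of costs,
    -- picking a value halfway between the costs that will surround it.
    insertCostAt :: Int -> [Double] -> [Double]
    insertCostAt i l = before ++ [newcost] ++ after
      where
        (before, after) = splitAt i l
        newcost = case (lastMaybe before, headMaybe after) of
            (Just lo, Just hi) -> (lo + hi) / 2
            (Just lo, Nothing) -> lo + 1
            (Nothing, Just hi) -> hi / 2
            (Nothing, Nothing) -> 100
        lastMaybe [] = Nothing
        lastMaybe xs = Just (last xs)
        headMaybe [] = Nothing
        headMaybe (x:_) = Just x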
|
|
|
|
|
|
| |
Pass the subcommand as a regular param, which allows passing git parameters
like -c before it. This was already done in the piping set of functions,
but not in the command-running set.
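An illustrative sketch of the idea, using plain strings rather than the real command-param types: once the subcommand is just another element of the list, a caller is free to put -c options ahead of it.

    import System.Process (callProcess)

    -- The subcommand is an ordinary element of the parameter list,
    -- so git options can precede it.
    runGit :: [String] -> IO ()
    runGit params = callProcess "git" params

    demo :: IO ()
    demo = runGit ["-c", "core.quotepath=false", "ls-files", "--cached"]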
|
| |
|
|
|
|
|
| |
A couple of places still use git config ad-hoc, but this covers most of it.
|
|
|
|
| |
currently-supported AWS regions.
|
| |
|
| |
|
|
|
|
| |
to the repository, by setting embedcreds=yes|no when running initremote.
|
|
|
|
|
|
|
|
|
| |
Files are now written to a tmp directory in the remote, and once all
chunks are written, etc., the file is moved into its final place atomically.
For now, checkpresent still checks every single chunk of a file, because
the old method could leave partially transferred files with some chunks
present and others not.
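A sketch of the write-to-tmp-then-rename pattern, with assumed names and paths (the actual remote code differs):

    import System.Directory (createDirectoryIfMissing, renameDirectory)

    -- Write every chunk into a tmp directory, and only when all of them
    -- are in place move the directory to its final location. The rename
    -- is atomic on the same filesystem, so a partially transferred file
    -- never appears at the final location.
    storeChunked :: FilePath -> FilePath -> [IO ()] -> IO ()
    storeChunked tmpdir finaldir writeChunkActions = do
        createDirectoryIfMissing True tmpdir
        sequence_ writeChunkActions  -- each action writes one chunk into tmpdir
        renameDirectory tmpdir finaldir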
|
| |
|
| |
|
|
|
|
|
|
| |
Both the directory and webdav special remotes used to have to buffer
the whole file contents before they could be decrypted, as they read
from chunks. Now the chunks are streamed through gpg with no buffering.
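A sketch of the streaming side, assuming a writable handle to gpg's stdin (names are illustrative): lazy reads mean each chunk flows through without the whole file being held in memory.

    import qualified Data.ByteString.Lazy as L
    import System.IO (Handle, hClose)

    -- Feed a sequence of chunk files to gpg's stdin without assembling
    -- the whole encrypted file in memory first.
    feedChunks :: Handle -> [FilePath] -> IO ()
    feedChunks togpg chunkfiles = do
        mapM_ (\f -> L.readFile f >>= L.hPut togpg) chunkfiles
        hClose togpg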
|
|
|
|
|
| |
Note that receiving encrypted chunked content currently involves buffering.
(So does doing so with the directory special remote.)
|
|
|
|
|
| |
However, directory still uses its optimized chunked file writer, as it uses
less memory than the generic one in the helper.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
| |
Transfer info files are updated when the callback is called, updating
the number of bytes transferred.
Left unused p variables at every place the callback should be used.
Which is rather a lot..
|
|
|
|
|
|
|
|
| |
This *almost* works.
Along the way, I noticed that the --uuid parameter was being accidentally
passed after the --, so it has never actually been used by
git-annex-shell to verify it's running in the expected repository. Oops. Fixed.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In order to record a semi-useful filename associated with the key,
this required plumbing the filename all the way through to the remotes'
storeKey and retrieveKeyFile.
Note that there is potential for deadlock here, narrowly avoided.
Suppose the repos are A and B. A sends file foo to B, and at the same
time, B gets file foo from A. So, A locks its upload transfer info file,
and then locks B's download transfer info file. At the same time,
B is taking the two locks in the opposite order. This is only not a
deadlock because the lock code does not wait, and aborts. So one of A or
B's transfers will be aborted and the other transfer will continue.
Whew!
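A sketch of the non-waiting lock behaviour described above, illustrated with the filelock package rather than git-annex's own lock code:

    import System.FileLock (tryLockFile, unlockFile, SharedExclusive(Exclusive))

    -- Try to take the transfer lock without blocking. If another process
    -- already holds it, give up immediately instead of waiting; this is
    -- what keeps the A<->B scenario above from deadlocking.
    withTransferLock :: FilePath -> IO () -> IO Bool
    withTransferLock lockfile transfer = do
        mlock <- tryLockFile lockfile Exclusive
        case mlock of
            Nothing -> return False   -- already locked; abort this transfer
            Just lock -> do
                transfer
                unlockFile lock
                return True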
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Baked into the code was an assumption that a repository's git directory
could be determined by adding ".git" to its work tree (or nothing for bare
repos). That fails when core.worktree, or GIT_DIR and GIT_WORK_TREE are
used to separate the two.
This was attacked at the type level, by storing the gitdir and worktree
separately, so Nothing for the worktree means a bare repo.
A complication arose because we don't learn where a repository is bare
until its configuration is read. So another Location type handles
repositories that have not had their config read yet. I am not entirely
happy with this being a Location type, rather than representing them
entirely separately from the Git type. The new code is not worse than the
old, but better types could enforce more safety.
Added support for core.worktree. Overriding it with -c isn't supported
because it's not really clear what to do if a git repo's config is read, is
not bare, and is then overridden to bare. What is the right git directory
in this case? I will worry about this if/when someone has a use case for
overriding core.worktree with -c. (See Git.Config.updateLocation)
Also removed and renamed some functions like gitDir and workTree that
misused git's terminology.
One minor regression is known: git annex add in a bare repository does not
print a nice error message, but runs git ls-files in a way that fails
earlier with a less nice error message. This is because before --work-tree
was always passed to git commands, even in a bare repo, while now it's not.
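Roughly, the shape of the change at the type level looks something like this (an approximation, not the exact definitions):

    -- A repository whose config has been read stores the git directory
    -- and the work tree separately; Nothing means a bare repo.
    -- A repository whose config has not been read yet only has a path.
    data RepoLocation
        = Local
            { gitdir   :: FilePath        -- e.g. "/repo/.git", or a bare repo dir
            , worktree :: Maybe FilePath  -- Nothing for bare repositories
            }
        | LocalUnknown FilePath           -- config not read yet; bareness unknown
        deriving (Show)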
|
| |
|
|
|
|
|
|
| |
This option avoids gpg key distribution, at the expense of flexibility, and
with the requirement that all clones of the git repository be equally
trusted.
|
| |
|
|
|
|
| |
void :: Functor f => f a -> f () -- ah, of course that's useful :)
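For instance, it tidies up places where a result is deliberately thrown away:

    import Control.Concurrent (forkIO)
    import Control.Monad (void)

    -- Discard the ThreadId instead of binding it to an unused name.
    startWorker :: IO () -> IO ()
    startWorker worker = void (forkIO worker)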
|
|
|
|
| |
Lock files, directories, etc.
|
|
|
|
|
|
|
|
|
|
| |
getConfig got a remote-specific config, and this confusing name caused it
to be used in a couple of places that were only interested in global configs.
Rename to getRemoteConfig and make getConfig only get global configs.
There are no behavior changes here, but remote.<name>.annex-web-options
never actually worked (and per-remote web options are such an unlikely
use case that I didn't make them work), so fix the documentation for it.
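Illustrative toy versions of the two lookups, modelled over a plain Map rather than the Annex monad the real functions use:

    import qualified Data.Map as M

    type GitConfig = M.Map String String

    -- Look up a global git config setting, with a default.
    getConfig :: GitConfig -> String -> String -> String
    getConfig conf key def = M.findWithDefault def key conf

    -- Look up a per-remote setting, remote.<name>.annex-<key>.
    getRemoteConfig :: GitConfig -> String -> String -> String -> String
    getRemoteConfig conf remotename key def =
        getConfig conf ("remote." ++ remotename ++ ".annex-" ++ key) def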
|
| |
|
|
|
|
|
|
| |
Locking is used, so that, if there are multiple git-annex processes
using a remote concurrently, the stop hook is only run by the last
process that uses it.
|
|
|
|
|
| |
Allows showing a progress bar for this last case of the directory special
remote.
|
|
|
|
| |
So the user can override them.
|
|
|
|
|
|
|
|
|
|
|
| |
Ssh connection caching is now enabled automatically by git-annex. Only one
ssh connection is made to each host per git-annex run, which can speed some
things up a lot, as well as avoiding repeated password prompts. Concurrent
git-annex processes also share ssh connections. Cached ssh connections are
shut down when git-annex exits.
Note: The rsync special remote does not yet participate in the ssh
connection caching.
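Under the hood this amounts to passing ssh's connection-sharing options with a per-host control socket; a rough sketch of assembling them (the socket path scheme is illustrative):

    import System.FilePath ((</>))

    -- Options that make ssh reuse one cached connection per host.
    -- In practice the socket lives under the local .git/annex directory;
    -- the exact naming here is illustrative.
    sshCachingOptions :: FilePath -> String -> [String]
    sshCachingOptions socketdir host =
        [ "-S", socketdir </> host        -- control socket for this host
        , "-o", "ControlMaster=auto"      -- first connection becomes the master
        , "-o", "ControlPersist=yes"      -- keep it open for later commands
        ]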
|
|
|
|
|
|
|
| |
Avoids expensive file transfers, at the expense of checking file size
and/or contents.
Required some reworking of the remote code.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
| |
Constructors and configuration make sense in separate modules.
A separate Git.Types is needed to avoid cycles.
|
|
|
|
| |
multiple different encrypted special remotes.
|
| |
|
| |
|