| Commit message | Author | Age |
|
|
|
| |
work. Closes: #763057
|
| |
|
|
|
|
| |
debian/rules.
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
See 2fb7ad68637cc4e1092f835055a974f141808ca0 for backstory about how a repo
could be in this state.
When decryption fails, the repo must be using non-encrypted creds. Note
that creds are encrypted/decrypted using the encryption cipher which is
stored in the repo, so the decryption cannot fail due to missing gpg keys
etc. (For !shared encryption, the cipher is itself encrypted using some
gpg key(s), and the decryption of the cipher happens earlier, so it is not
affected by this change.)
Print a warning message for !shared repos, and continue on using the
cipher. Wrote a page explaining what users hit by this bug should do.
This commit was sponsored by Samuel Tardieu.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
remote's key.
encryptionSetup must be called before setRemoteCredPair. Otherwise,
the RemoteConfig doesn't have the cipher in it, and so no cipher is used to
encrypt the embedded creds.
This is a security fix for non-shared encryption methods!
For encryption=shared, there's no security problem, just an
inconsistency in whether the embedded creds are encrypted.
This is very important to get right, so I used some types to help ensure that
setRemoteCredPair is only run after encryptionSetup. Note that the external
special remote bypasses the type safety, since creds can be set after the
initial remote config, if the external special remote program requests it.
Also note that IA remotes never use encryption, so encryptionSetup is not
run for them at all, and again the type safety is bypassed.
This leaves two open questions:
1. What to do about S3 and glacier remotes that were set up
using encryption=pubkey/hybrid with embedcreds?
Such a git repo has a security hole embedded in it, and this needs to be
communicated to the user. Is the changelog enough?
2. enableremote won't work in such a repo, because git-annex will
try to decrypt the embedded creds, which are not encrypted, so fails.
This needs to be dealt with, especially for encryption=shared repos,
which are not really broken, just inconsistently configured.
Noticing that problem for encryption=shared is what led to commit
cc54ff9e49260cd94f938e69e926a273e231ef4e, which tried to
fix the problem by not decrypting the embedded creds.
This commit was sponsored by Josh Taylor.
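As an illustration of the ordering-by-types approach mentioned above, here
is a minimal sketch in the same spirit. The names and signatures are
hypothetical, not git-annex's actual code: a witness value that only
encryptionSetup can produce is required by setRemoteCredPair, so the
compiler rejects any call made before setup.

    -- Hypothetical sketch, not git-annex's real definitions.
    module CredOrder where

    type RemoteConfig = [(String, String)]
    type CredPair = (String, String)

    -- Witness that encryptionSetup has already run on this config.
    newtype EncryptedConfig = EncryptedConfig RemoteConfig

    -- The only way to obtain an EncryptedConfig, so the cipher is
    -- guaranteed to be in the config before any creds are embedded.
    encryptionSetup :: RemoteConfig -> IO EncryptedConfig
    encryptionSetup c = pure (EncryptedConfig (("cipher", "<generated>") : c))

    -- Taking EncryptedConfig rather than RemoteConfig makes it a type
    -- error to call this before encryptionSetup.
    setRemoteCredPair :: EncryptedConfig -> CredPair -> IO RemoteConfig
    setRemoteCredPair (EncryptedConfig c) (login, password) =
        pure (("credpair", login ++ ":" ++ password) : c)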
|
|
|
|
|
|
|
|
|
|
| |
the repository was configured with encryption=shared embedcreds=yes."
This reverts commit cc54ff9e49260cd94f938e69e926a273e231ef4e.
I can find no basis for that commit and think that I made it in error.
setRemoteCredPair always encrypts using the cipher from remoteCipher,
even when the cipher is shared.
|
|
|
|
| |
already done in indirect mode.
|
|
|
|
| |
introduced in version 5.20140817.)
|
|
|
|
| |
not yet supported on Windows.
|
|
|
|
| |
automatically shut down the assistant. Closes: #761261
|
| |
|
|
|
|
| |
This also works with 0.9, and probably 0.8.
|
|
|
|
| |
capacity/accuracy, fall back to a reasonable default bloom filter size.
|
|
|
|
| |
repository, rather than just the file's base name. Note that if you're relying on such things to keep files separate with WORM, you should really be using a better backend.
|
|
|
|
| |
direct mode. (Fixing a very minor reversion.)
|
|
|
|
| |
processes were both working to perform the same set of transfers.
|
| |
|
|
|
|
| |
key is present on a rsync remote, and when dropping a key from the remote.
|
|
|
|
| |
as an url.
|
|
|
|
|
|
|
|
|
|
| |
* New annex.hardlink setting. Closes: #758593
* init: Automatically detect when a repository was cloned with --shared,
and set annex.hardlink=true, as well as marking the repository as
untrusted.
Had to reorganize Logs.Trust a bit to avoid a cycle between it and
Annex.Init.
|
|
|
|
|
|
|
| |
It seems that all other uses of <div .col-sm-9> occur outside of
<div .content-box>. This one occurred inside it, when xmpp pairing.
This was introduced in the bootstrap 3 conversion.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
repository to another. Timestamps are still preserved as long as cp --preserve=timestamps is supported.
This avoids cp -a overriding the default mode acls that the user might have
set in a git repository.
With GNU cp, this behavior change should not be a breaking change, because
git-annex also uses rsync sometimes in the same situation, and has only ever
preserved timestamps when using rsync.
Systems without GNU cp will no longer use cp -a, but instead just cp.
So, timestamps will no longer be preserved. Preserving timestamps when
copying between repos is not guaranteed anyway.
Closes: #729757
|
| |
|
|
|
|
|
|
| |
Old behavior was to take the first fuzzy match. Now, it checks the global
git config, and runs the normal fuzzy handling, including failing to run a
semi-random command by default.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
to ensure that remotes on removable media can be unmounted. Closes: #758630
This does mean that eg, copying multiple files to a local remote will
become slightly slower, since it now restarts git-cat-file after each copy.
This should not be a significant slowdown.
The reason git-cat-file is run on the remote at all is to update its
location log. In order to add an item to it, it needs to get the current
content of the log. Finding a way to avoid needing to do that would be a
good path to avoiding this slowdown if it does become a problem somehow.
This commit was sponsored by Evan Deaubl.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
them being inherited by child processes such as git commands.
(With the exception of daemon pid locking.)
This fixes part of #758630. I reproduced the assistant locking eg, a
removable drive's annex journal lock file and forking a long-running
git-cat-file process that inherited that lock.
This did not affect Windows.
Considered doing a portable Utility.LockFile layer, but git-annex uses
posix locks in several special ways that have no direct Windows equivalent,
and it seems like it would mostly be a complication.
This commit was sponsored by Protonet.
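For illustration only, a small standalone sketch of the general technique
(not the actual Utility.LockFile or assistant code): open the lock file,
mark its fd close-on-exec before taking the posix lock, and any child
process spawned afterwards will not inherit it.

    -- Hedged sketch; the file name and child command are placeholders.
    import System.IO (IOMode(WriteMode), SeekMode(AbsoluteSeek), openFile)
    import System.Posix.IO (FdOption(CloseOnExec), LockRequest(WriteLock),
                            handleToFd, setFdOption, setLock)
    import System.Process (callProcess)

    main :: IO ()
    main = do
        h <- openFile "example.lck" WriteMode
        fd <- handleToFd h
        -- Mark the fd close-on-exec before any external command is run,
        -- so children like git-cat-file do not keep the lock open.
        setFdOption fd CloseOnExec True
        setLock fd (WriteLock, AbsoluteSeek, 0, 0)
        callProcess "git" ["version"]   -- stand-in for a long-running child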
|
|
|
|
|
|
|
|
|
|
|
|
| |
Note that this means getopt parsing is done even when not in a git
repository, even though currently cmdnorepo is not passed the results of
it. I'd like to move to cmdnorepo not doing its own ad-hoc option parsing,
so this is really a good thing. (But as long as eg, getOptionFlag needs an
Annex monad, it cannot be used in cmdnorepo handling.)
There is a potential for problems if any cmdnorepo branch of a command
handles options that are not in its regular getopt, but that would be a bug
anyway.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The hoary old HTTP library was only used when checking if an url exists,
when curl was not available. It had many problems, including not supporting
https at all.
Now, this is done using http-conduit for all urls that it supports. Falls
back to curl for any url that http-conduit doesn't like (probably ftp etc,
but could also be an url that its parser chokes on for whatever reason).
This adds a new dependency on http-conduit, but webdav support already
indirectly depended on that, and the s3-aws branch also uses it.
This opens up the possibility of using http-conduit for large file
downloads, but for now I've left it using wget/curl.
This commit was sponsored by Paul Tötterman.
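A rough sketch of the kind of check described above, using http-conduit's
simple interface; this is not the actual Utility.Url code, and the Nothing
case is where the caller would fall back to curl.

    {-# LANGUAGE OverloadedStrings, ScopedTypeVariables #-}
    import Control.Exception (try)
    import Network.HTTP.Simple (HttpException, getResponseStatusCode,
                                httpNoBody, parseRequest, setRequestMethod)

    -- Just True/False: http-conduit handled the url; Nothing: it could not
    -- even parse it, so let curl have a go.
    urlExists :: String -> IO (Maybe Bool)
    urlExists url = case parseRequest url of
        Nothing -> return Nothing
        Just req -> do
            r <- try (httpNoBody (setRequestMethod "HEAD" req))
            return $ case r of
                Left (_ :: HttpException) -> Just False
                -- simplified: only a 200 response counts as existing
                Right resp -> Just (getResponseStatusCode resp == 200)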
|
|
|
|
| |
that already has a transfer lock file indicating it's being sent to that remote. The remote may have moved between networks, or reconnected.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
repository was configured with encryption=shared embedcreds=yes.
Since encryption=shared, the encryption key is stored in the git repo, so
there is no point at all in encrypting the creds (also stored in the git
repo) with that key. So `initremote` doesn't. The creds are simply stored
base-64 encoded.
However, it then tried to always decrypt creds when encryption was used.
|
|
|
|
| |
subdirectory in the key name.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
replaceFileOr was broken and ran the rollback action always.
Luckily, for replaceFile, the rollback action was safe to run, since it
just nuked a temp file that had already been moved into place.
However, when `git annex direct` used replaceFileOr, its rollback printed a
scary message:
/home/joey/tmp/rrrr/.git/annex/misctmp/tmp32268: rename: does not exist (No such file or directory)
There was actually no bad result though.
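The intended semantics, sketched with made-up types for illustration (the
real replaceFileOr lives in git-annex's own modules): the rollback should
only run when moving the file into place fails, not unconditionally as the
buggy version did.

    import Control.Exception (onException)
    import System.Directory (renameFile)

    -- Hypothetical signature; write fills in a temp file, rollback cleans
    -- it up, and rollback runs only if the rename into place fails.
    replaceFileOr :: FilePath -> (FilePath -> IO ()) -> (FilePath -> IO ()) -> IO ()
    replaceFileOr dest write rollback = do
        let tmp = dest ++ ".tmp"   -- placeholder; the real code uses a tmp dir
        write tmp
        renameFile tmp dest `onException` rollback tmp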
|
| |
|
|
|
|
| |
cannot be unlocked due to disk space, and try all specified files.
|
|
|
|
|
|
| |
The httpBodyRetriever will later also be used by S3.
This commit was sponsored by Ethan Aubin.
|
| |
|
|
|
|
|
|
| |
The httpStorer will later also be used by S3.
This commit was sponsored by Torbjørn Thorsen.
|
|
|
|
|
|
|
|
|
|
|
|
| |
This speeds up the webdav special remote somewhat, since it often now
groups actions together in a single http connection when eg, storing a
file.
Legacy chunks are still supported, but have not been sped up.
This depends on an as-yet-unreleased version of DAV.
This commit was sponsored by Thomas Hochstein.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
support
Reusing the http connection when operating on chunks is not done yet;
I had to submit some patches to DAV to support that. However, this is no
slower than old-style chunking was.
Note that it's a fileRetriever and a fileStorer, despite DAV using
bytestrings that would allow streaming. As a result, upload/download of
encrypted files is made a bit more expensive, since it spools them to temp
files. This was needed to get the progress meters to work.
There are probably ways to avoid that. But it turns out that the current
DAV interface buffers the whole file content in memory, and I have
sent in a patch to DAV to improve its interfaces. Using the new interfaces,
it's certainly going to need to be a fileStorer, in order to read the file
size from the file (getting the size of a bytestring would destroy
laziness). It should be possible to use the new interface to make it be a
byteRetriever, so I'll change that when I get to it.
This commit was sponsored by Andreas Olsson.
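To illustrate the laziness point (not actual git-annex code): statting the
file is cheap, while taking the length of a lazy ByteString forces the
whole stream to be read.

    import Data.Int (Int64)
    import qualified Data.ByteString.Lazy as L
    import System.IO (IOMode(ReadMode), hFileSize, withFile)

    -- Cheap: asks the filesystem, reads no content.
    sizeFromFile :: FilePath -> IO Integer
    sizeFromFile f = withFile f ReadMode hFileSize

    -- Forces every chunk of the lazy stream just to count its bytes.
    sizeFromBytes :: L.ByteString -> Int64
    sizeFromBytes = L.length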
|
|
|
|
|
|
| |
When files are stored using rsync, they have their write bit removed;
so does the directory they're put in. The local repo code did not turn
these bits back on, so removing the content failed.
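A minimal sketch of the fix's idea (not the actual Remote.Rsync code): put
the owner write bit back on the object's directory (and the file) before
deleting.

    import System.Directory (removeDirectoryRecursive)
    import System.FilePath (takeDirectory)
    import System.Posix.Files (fileMode, getFileStatus, ownerWriteMode,
                               setFileMode, unionFileModes)

    allowWrite :: FilePath -> IO ()
    allowWrite p = do
        s <- getFileStatus p
        setFileMode p (fileMode s `unionFileModes` ownerWriteMode)

    removeObject :: FilePath -> IO ()
    removeObject objfile = do
        let dir = takeDirectory objfile
        allowWrite dir       -- rsync stripped the directory's write bit too
        allowWrite objfile
        removeDirectoryRecursive dir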
|
|
|
|
| |
Some reorg of Remote.Rsync code to export the things gcrypt needs.
|
|
|
|
|
| |
Chunking does not speed up rsync at all, so it's only useful for
interop with the directory special remote.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
bup already splits files and does rolling deltas, so there is no reason to
use chunking here.
The new API made it easier to add progress support for storeKey, so that's
done. Unfortunately, bup-split still outputs its own progress with -q,
so it's a little ugly, but not too bad.
Made dropping remove the branch for an object, for two reasons:
1. The new API calls removeKey to roll back a storeKey when the content
changed unexpectedly.
2. So that testremote will be happy.
Also, fixed a bug that caused a crash when removing the branch for an
object in rollback.
|
|\ |
|