Commit messages
- The meter code does that too.
- Higher than any other remote; this is mostly due to the long retrieval
  time, so it'd make sense to get a file from nearly any other remote.
  (Unless it's behind a very slow connection.)
- to the repository, by setting embedcreds=yes|no when running initremote.
- too small chunksize was configured.
  Ensure that each file has something written to it, even if the bytestring
  chunk size is greater than the configured chunksize.
  This means we may write chunks a bit larger than the configured value, but
  only when the configured value is very small; i.e., less than 8 KB.
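The chunking rule above can be sketched as follows. This is a hypothetical Python illustration (git-annex itself is written in Haskell, and `write_chunks` is an invented name); it models writing to chunk files as building a list of byte strings. A piece is never split, so every chunk gets at least one piece, and a chunk only exceeds the configured size when a single incoming piece is already larger than it:

```python
def write_chunks(pieces, chunksize):
    """Group a stream of byte pieces into chunks of roughly `chunksize`
    bytes. Pieces are never split across chunks, so when a piece is
    larger than `chunksize` (only likely when chunksize is very small),
    a chunk may exceed the configured size -- but every chunk is
    guaranteed to contain at least one piece."""
    out = []
    current = b""
    for piece in pieces:
        # Start a new chunk when the current one is non-empty and the
        # next piece would push it past the configured size.
        if current and len(current) + len(piece) > chunksize:
            out.append(current)
            current = b""
        current += piece
    if current:
        out.append(current)
    return out
```

For example, `write_chunks([b"aaaa", b"bbbb", b"cc"], 6)` yields `[b"aaaa", b"bbbbcc"]`, and a single 10-byte piece with `chunksize=4` still produces one (oversized) chunk rather than an empty file.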
- Files are now written to a tmp directory in the remote, and once all
  chunks are written, the file is moved into its final place atomically.
  For now, checkpresent still checks every single chunk of a file, because
  the old method could leave partially transferred files with some chunks
  present and others not.
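The write-to-tmp-then-rename pattern described above can be sketched like this (a hypothetical Python illustration, not git-annex's actual Haskell code; the `store_atomically` name and chunk file naming are invented):

```python
import os
import tempfile

def store_atomically(chunks, destdir):
    """Write all chunks into a temporary directory first, then rename it
    into place. A reader never sees a partially transferred file: either
    the final directory exists with every chunk, or it does not exist at
    all."""
    parent = os.path.dirname(destdir) or "."
    tmpdir = tempfile.mkdtemp(dir=parent, prefix="tmp-")
    for i, chunk in enumerate(chunks):
        with open(os.path.join(tmpdir, "chunk%d" % i), "wb") as f:
            f.write(chunk)
    # Renaming a directory is atomic on POSIX filesystems, so the
    # destination appears fully populated or not at all.
    os.rename(tmpdir, destdir)
```

A crash before the final `os.rename` leaves only a `tmp-*` directory behind, which is why the old method (writing chunks directly into the final location) needed checkpresent to verify every chunk.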
- Both the directory and webdav special remotes used to have to buffer
  the whole file contents before they could be decrypted, as they read
  from chunks. Now the chunks are streamed through gpg with no buffering.
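The streaming arrangement above can be sketched as a feeder thread writing chunks into a filter process's stdin while output is read incrementally, so the whole file is never held in memory. This is a hypothetical Python illustration (`stream_through` is an invented name); `cat` stands in for gpg so the example is self-contained:

```python
import subprocess
import threading

def stream_through(cmd, chunks):
    """Stream byte chunks through a filter process without buffering the
    whole file: a feeder thread writes each chunk to the process's stdin
    while the caller consumes its stdout incrementally."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def feed():
        for chunk in chunks:
            proc.stdin.write(chunk)
        proc.stdin.close()  # signal EOF so the filter can finish

    threading.Thread(target=feed, daemon=True).start()
    while True:
        out = proc.stdout.read(65536)
        if not out:
            break
        yield out
    proc.wait()
```

With gpg in place of `cat`, each chunk read from the remote is decrypted as it arrives, instead of concatenating all chunks first.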
- The Element data type changed to use a map of attributes. Rather than
  using an ifdef, I'm avoiding directly using that data type.
- Note that receiving encrypted chunked content currently involves buffering.
  (So does doing so with the directory special remote.)
- This allows deleting all chunks for a file with a single HTTP command,
  so it's a win after all.
  However, it does not look in the mixed-case hash directories, which were
  used in the past by the directory, etc. remotes.
- Not yet getting it though.
- However, directory still uses its optimized chunked file writer, as it
  uses less memory than the generic one in the helper.
- The benefit of using a compatible directory structure does not outweigh
  the cost in complexity of handling the multiple locations content can be
  stored in directory special remotes. This also allows doing away with the
  parent directories, which can't be made unwritable in DAV, so they have
  no benefit there. This will save 2 HTTP calls per file store.
  But the directory hashing is kept, just in case.
- Doesn't actually store anything yet, but initremote works and tests the
  server.
- a key's presence.
  Also, use the new withQuietOutput function to avoid running a shell just
  to redirect stderr to /dev/null in two other places.
- which doesn't work with LDAP or NIS.
- bup 0.25 does not accept that, and bup split reads from stdin by default
  if no file is given. I'm not sure what version of bup changed this.
  This only affected bup special remotes that were encrypted.
- Conflicts:
    debian/changelog
    git-annex.cabal
- installed, and set annex-ignore.
  Aka solve the GitHub problem.
  Note that it's possible the initial configlist will fail for some network
  reason etc., and then the fetch succeeds. In this case, a usable remote
  gets disabled. But it does print a message, and this only happens once
  per remote, so that seems ok.
- SampleVars from base are unsafe.
|
|
|
|
|
- Made Git.LsFiles return cleanup actions, and everything waits on
  processes now, except of course for Seek.
- Rather than store decrypted creds in the environment, store them in the
  creds cache file.
  This way, a single git-annex can have multiple S3 remotes using different
  creds.
- When a transfer fails, the progress info can be used to intelligently
  retry it. If the transfer managed to make some progress, but did not
  fully complete, then there's a good chance that a retry will finish it
  (or at least make more progress).
- Finally done with progressbars!
- Easy!
  Note that with an encrypted remote, rsync will be sending a little more
  data than the key size, so displayed progress may get to 100% slightly
  quicker than it should. I doubt this is a big enough effect to worry
  about.
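Since rsync can report more bytes sent than the key's size for encrypted content, the displayed percentage needs to be clamped. A minimal sketch, assuming a hypothetical `display_percent` helper (not git-annex's actual code):

```python
def display_percent(bytes_sent, key_size):
    """Percentage to display for a transfer. rsync may send slightly
    more data than the key's size when content is encrypted, so clamp
    the result at 100."""
    if key_size <= 0:
        return 100
    return min(100, bytes_sent * 100 // key_size)
```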