With an encrypted rsync remote, the encrypted file can be renamed, rather
than being copied, in crippled filesystem mode. This gets back to just as
fast as non-crippled mode for this very common case.
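
The rename-vs-copy choice can be sketched like this (a minimal sketch with illustrative names, not git-annex's actual code); the point is that the locally encrypted file is a disposable temp file, so nothing is lost by renaming it into place:

    -- Illustrative sketch: pick the cheapest way to stage a disposable
    -- temp file for rsync to send.
    import System.Directory (copyFile, renameFile)
    import System.PosixCompat.Files (createLink)

    stageForSend :: Bool -> Bool -> FilePath -> FilePath -> IO ()
    stageForSend crippled disposable src dest
      | not crippled = createLink src dest -- fast path: hard link
      | disposable   = renameFile src dest -- crippled fs: rename the temp file
      | otherwise    = copyFile src dest   -- last resort: full copy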
Cannot make a hard link, have to copy.
I did find a way to make it work without setting up a tree, just using
--include and --exclude. But it needs the same hash directories to be used
on both sides, which is normally not the case. Still, I hope one day I will
convert non-bare repos to use the same hash dirs as everything else, and
then this will get more efficient.
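
The --include/--exclude trick might look roughly like this (a hypothetical helper, not git-annex's code): build filter rules that let rsync descend only through the hash directories leading to one key, and prune everything else. It only works when both sides hash keys into the same directory layout.

    import Data.List (inits, intercalate)

    -- Hypothetical: for hash dirs ["5f","a3"] and key "KEY", produces
    --   --include=5f/ --include=5f/a3/ --include=5f/a3/KEY/*** --exclude=*
    rsyncKeyFilter :: [FilePath] -> String -> [String]
    rsyncKeyFilter hashdirs key = parents ++ leaf
      where
        -- one include per parent directory, so rsync will recurse into them
        parents = [ "--include=" ++ intercalate "/" d ++ "/"
                  | d <- drop 1 (inits hashdirs) ]
        -- include everything under the key's own directory, prune the rest
        leaf = [ "--include=" ++ intercalate "/" (hashdirs ++ [key]) ++ "/***"
               , "--exclude=*" ]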
git annex init probes for crippled filesystems, and sets direct mode, as
well as `annex.crippledfilesystem`.
Avoid manipulating permissions of files on crippled filesystems. That
would likely cause an exception to be thrown.
Very basic support in Command.Add for crippled filesystems; avoids the
lockdown entirely, since doing it needs both permissions and hard links.
Will make this better soon.
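
A minimal sketch of such a probe, assuming the usual trick of testing whether permissions are honored (the scratch file name and details are made up, not the real implementation):

    import Control.Exception (SomeException, try)
    import System.FilePath ((</>))
    import System.Posix.Files (fileMode, getFileStatus, intersectFileModes,
      nullFileMode, ownerWriteMode, removeLink, setFileMode)

    probeCrippledFileSystem :: FilePath -> IO Bool
    probeCrippledFileSystem dir = do
      let f = dir </> "gaprobe" -- hypothetical scratch file
      writeFile f ""
      r <- try (setFileMode f nullFileMode) :: IO (Either SomeException ())
      crippled <- case r of
        Left _ -> return True -- chmod threw an exception
        Right () -> do
          -- if the write bit survived chmod 0, permissions are not honored
          mode <- fileMode <$> getFileStatus f
          return (mode `intersectFileModes` ownerWriteMode == ownerWriteMode)
      removeLink f
      return crippled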
Checks the key's size and checksum. This is sorta expensive, but it avoids
needing to add another round-trip to the protocol.
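
A sketch of what that local verification involves, under the assumption of a simplified Key type (the real one is richer): a key carries its expected size, and checksum backends supply a verifier, so the received file can be checked without another round-trip.

    import System.Posix.Files (fileSize, getFileStatus)

    -- Hypothetical, simplified Key; stands in for git-annex's real type.
    data Key = Key
      { keySize     :: Maybe Integer               -- size field from the key
      , keyVerifier :: Maybe (FilePath -> IO Bool) -- backend checksum check
      }

    verifyKey :: Key -> FilePath -> IO Bool
    verifyKey k f = do
      sizeok <- case keySize k of
        Nothing -> return True
        Just expected -> do
          s <- getFileStatus f
          return (fromIntegral (fileSize s) == expected)
      if sizeok
        then maybe (return True) ($ f) (keyVerifier k)
        else return False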
Only missing direct mode transfer check now is git-annex shell recvkey.
the transfer, which can happen in direct mode.
Still a couple of places that use git config ad-hoc, but this is most of it
done.
However, I don't yet have a reliable way to deal with files being modified
while they're being transferred. I have code that detects it on the sending
side, but the receiver is still free to move the wrong content into its
annex, and record that it has the content. So that's not acceptable, and
I'll need to work on it some more.
Still, at this point I can use a direct mode repository as a remote and
transfer files from and to it.
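
The sending-side detection can be sketched like this (illustrative names, not the actual code): snapshot the file's stat info before the transfer, compare afterwards, and treat the transfer as failed if anything changed.

    import System.Posix.Files (fileID, fileSize, getFileStatus, modificationTime)
    import System.Posix.Types (EpochTime, FileID, FileOffset)

    data Snapshot = Snapshot FileID FileOffset EpochTime
      deriving Eq

    snapshot :: FilePath -> IO Snapshot
    snapshot f = do
      s <- getFileStatus f
      return (Snapshot (fileID s) (fileSize s) (modificationTime s))

    -- Returns False when the file was modified mid-transfer, so the
    -- transfer can be failed rather than recorded as complete.
    sendUnlessModified :: FilePath -> (FilePath -> IO ()) -> IO Bool
    sendUnlessModified f send = do
      before <- snapshot f
      send f
      after <- snapshot f
      return (before == after)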
livedrive.com. Needs DAV version 0.3.
currently-supported AWS regions.
one-click enabling of the repository.
remotes.
The meter code does that too.
Higher than any other remote, this is mostly due to the long retrieval
time, so it'd make sense to get a file from nearly any other remote.
(Unless it's behind a very slow connection.)
to the repository, by setting embedcreds=yes|no when running initremote.
too small chunksize was configured.
Ensure that each file has something written to it, even if the bytestring
chunk size is greater than the configured chunksize. This means we may
write a bit more than the configured value, but only when the configured
value is very small; i.e., < 8 KB.
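
A hypothetical sketch of the fix: consume the lazy ByteString chunk by chunk, always writing at least one chunk per file, so a configured chunksize smaller than the ByteString's own chunk granularity still makes progress. The overshoot is bounded by one chunk.

    import qualified Data.ByteString as S
    import qualified Data.ByteString.Lazy as L

    -- Hypothetical chunk writer; mkfile names the n'th chunk file.
    writeChunkFiles :: Int -> (Int -> FilePath) -> L.ByteString -> IO ()
    writeChunkFiles chunksize mkfile = go 0 . L.toChunks
      where
        go _ [] = return ()
        go n cs = do
          let (this, rest) = takeAtLeast chunksize cs
          S.writeFile (mkfile n) (S.concat this)
          go (n + 1) rest
        -- take chunks until at least the wanted size is accumulated;
        -- always takes at least one chunk, so every file gets data
        takeAtLeast _ [] = ([], [])
        takeAtLeast wanted (c:cs)
          | S.length c >= wanted = ([c], cs)
          | otherwise = let (t, r) = takeAtLeast (wanted - S.length c) cs
                        in (c : t, r)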
Files are now written to a tmp directory in the remote, and once all
chunks are written, it's moved into the final place atomically.
For now, checkpresent still checks every single chunk of a file, because
the old method could leave partially transferred files with some chunks
present and others not.
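
A minimal sketch of the new store sequence, assuming a local directory layout (names illustrative): write everything under a tmp directory, and rename it into the key's final location only once complete, so presence of the final directory implies all chunks are present.

    import System.Directory (createDirectoryIfMissing, renameDirectory)
    import System.FilePath (takeDirectory)

    -- Sketch: writer fills tmpdir with chunks; the rename is atomic
    -- as long as tmpdir and destdir are on the same filesystem.
    storeAtomically :: FilePath -> FilePath -> (FilePath -> IO ()) -> IO ()
    storeAtomically tmpdir destdir writer = do
      createDirectoryIfMissing True tmpdir
      writer tmpdir
      createDirectoryIfMissing True (takeDirectory destdir)
      renameDirectory tmpdir destdir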
Both the directory and webdav special remotes used to have to buffer
the whole file contents before they could be decrypted, as they read
from chunks. Now the chunks are streamed through gpg with no buffering.
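
The streaming shape can be sketched as follows (a sketch under assumptions, not git-annex's actual gpg plumbing): feed each stored chunk into a single gpg --decrypt process as it's read, and hand the decrypted output to a sink in bounded pieces, so at most one chunk is held in memory at a time.

    import Control.Concurrent (forkIO)
    import Control.Monad (forM_, unless)
    import qualified Data.ByteString as S
    import System.IO (hClose)
    import System.Process (StdStream (CreatePipe), createProcess, proc,
      std_in, std_out, waitForProcess)

    decryptChunks :: [FilePath] -> (S.ByteString -> IO ()) -> IO ()
    decryptChunks chunkfiles sink = do
      (Just hin, Just hout, _, pid) <-
        createProcess (proc "gpg" ["--quiet", "--decrypt"])
          { std_in = CreatePipe, std_out = CreatePipe }
      -- feeder thread: stream each chunk file into gpg's stdin
      _ <- forkIO $ do
        forM_ chunkfiles $ \c -> S.readFile c >>= S.hPut hin
        hClose hin
      -- drain gpg's stdout incrementally into the sink
      let drain = do
            b <- S.hGetSome hout 65536
            unless (S.null b) (sink b >> drain)
      drain
      _ <- waitForProcess pid
      return ()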
The Element data type changed to use a map of attributes. Rather than
use an ifdef, I'm avoiding using that data type directly.
Note that receiving encrypted chunked content currently involves buffering.
(So does doing so with the directory special remote.)
This allows deleting all chunks for a file with a single http command,
so it's a win after all.
However, this does not look in the mixed-case hash directories, which
were used in the past by the directory, etc. remotes.
Not yet getting it though.
However, directory still uses its optimized chunked file writer, as it uses
less memory than the generic one in the helper.
The benefit of using a compatible directory structure does not outweigh the
cost in complexity of handling the multiple locations content can be stored
in directory special remotes. And this also allows doing away with the parent
directories, which can't be made unwritable in DAV, so they have no benefit
there. This will save 2 http calls per file store.
But, kept the directory hashing, just in case.