rsync does not have a --no-delete, so do it this way instead

Fully tested and working, including resuming and encryption. (Though not
resuming when sending *with* encryption; gpg doesn't produce identical
output each time.)

Uses the same layout as the directory special remote and the
.git/annex/objects/ directory.
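
As a rough illustration of that layout (a sketch only, not the remote's
actual code; hashDir stands in for git-annex's real hash directory
function), a stored key ends up under two levels of hash directories and
then a per-key directory:

    import System.FilePath ((</>))

    -- Sketch: the path for a key, laid out like the directory special
    -- remote and .git/annex/objects/: hash directories, then a directory
    -- named after the key, holding the key's content.
    keyPath :: (String -> FilePath) -> String -> FilePath
    keyPath hashDir key = hashDir key </> key </> key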

Provide file size to new version of hS3.

Finished applying to S3 the change that fixed the memory leak in bup, but
it didn't seem to help S3; with encryption, memory use still grows to 2x
the file size.

This was a most surprising leak. It occurred in the process that is forked
off to feed data to gpg. That process was passed a lazy ByteString of
input, and ghc seemed to not GC the ByteString as it was lazily read
and consumed, so memory slowly leaked as the file was read and passed
through gpg to bup.

To fix it, I simply changed the feeder to take an IO action that returns
the lazy ByteString, and fed the result directly to hPut.

AFAICS, this should change nothing WRT buffering. But somehow it makes
ghc's GC do the right thing. Probably I triggered some weakness in ghc's
GC (version 6.12.1).

(Note that S3 still has this leak, and others too. Fixing it will involve
another dance with the type system.)

Update: One theory I have is that this has something to do with
the forking of the feeder process. Perhaps, when the ByteString
is produced before the fork, ghc decides it needs to hold a pointer
to the start of it, for some reason -- maybe it doesn't realize that
it is only used in the forked process.
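
A minimal sketch of the shape of that change (the feeder names here are
illustrative, not the actual git-annex definitions): the leaky version
captures the lazy ByteString itself in the forked feeder, while the fixed
version takes an IO action and only produces the ByteString inside the
feeder:

    import System.IO (Handle, hClose)
    import qualified Data.ByteString.Lazy as L

    -- Leaky shape: the lazy ByteString is produced before the fork, so a
    -- reference to its head is retained while the whole file streams
    -- through, and nothing can be collected until the end.
    feeder :: L.ByteString -> Handle -> IO ()
    feeder content h = L.hPut h content >> hClose h

    -- Fixed shape: pass an IO action that returns the lazy ByteString and
    -- feed its result directly to hPut, so the content is first produced
    -- (and can be collected chunk by chunk) inside the feeder itself.
    feeder' :: IO L.ByteString -> Handle -> IO ()
    feeder' getcontent h = getcontent >>= L.hPut h >> hClose h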

Stalls were caused by code that did approximately:

    content' <- liftIO $ withEncryptedContent cipher content return
    store content'

The return evaluated without actually reading content from S3,
and so the cleanup code began waiting on gpg to exit before
gpg could send all its data.

Fixing it involved moving the `store` type of action into the IO monad:

    liftIO $ withEncryptedContent cipher content store

Which was a bit of a pain to do (thank you, type system), but it avoids
the problem, as now the whole content is consumed, and stored, before
cleanup.
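
The same hazard can be reproduced with nothing but standard library
functions (an analogy only, not the git-annex code): returning a lazily
produced value out of a with-style bracket means it is forced only after
the cleanup has already run.

    import System.IO (withFile, IOMode(ReadMode))
    import qualified Data.ByteString.Lazy as L

    -- Broken by analogy: the lazy ByteString escapes the bracket, so by
    -- the time it is forced the handle is already closed (like waiting on
    -- gpg to exit before its output has been read).
    broken :: FilePath -> IO L.ByteString
    broken f = withFile f ReadMode L.hGetContents

    -- Fixed by analogy: consume the content inside the bracket, as the S3
    -- fix does by passing the store action as the continuation.
    fixed :: FilePath -> (L.ByteString -> IO a) -> IO a
    fixed f consume = withFile f ReadMode (\h -> L.hGetContents h >>= consume)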

On second thought, "unlocking" is confusable with git-annex unlock.

Untested; I will need to dust off my S3 keys, and plug the modem back in
that was unplugged last night due to very low battery bank power. But it
compiles, so it's probably perfect. :)

Forking a new process rather than relying on a thread to feed gpg.
The feeder thread was stalling, probably when the main thread got to the
point where it was wait()ing on gpg to exit.
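
Roughly the shape of that change, as a POSIX-only sketch (the names are
assumed, not git-annex's): the writer becomes a separate process, so the
parent can block waiting on gpg without starving the feeder.

    import System.IO (Handle, hClose)
    import System.Posix.Process (forkProcess)
    import qualified Data.ByteString.Lazy as L

    -- Sketch: feed gpg's stdin from a forked child process instead of a
    -- forkIO thread. The parent closes its copy of the write end so gpg
    -- sees EOF once the child is done.
    feedFromChild :: Handle -> IO L.ByteString -> IO ()
    feedFromChild togpg getcontent = do
        _ <- forkProcess $ do
            content <- getcontent
            L.hPut togpg content
            hClose togpg
        hClose togpg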

Some kind of laziness issue that I don't want to debug right now,
and decryption is not implemented.

Encrypted remotes don't yet encrypt data, but git annex initremote can
be used to generate a cipher and add additional gpg keys that can use it.
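
The general shape, as a sketch only (the randomness source, cipher length,
and gpg options are placeholders, not git-annex's actual scheme): generate
a random symmetric cipher, then store it encrypted to the chosen gpg key
ids, so any of those keys can later decrypt it.

    import Control.Monad (replicateM)
    import System.Process (readProcess)
    import System.Random (randomRIO)

    -- Toy cipher generator; a real implementation would use a
    -- cryptographically secure source of randomness.
    genCipher :: Int -> IO String
    genCipher len = replicateM len (randomRIO ('a', 'z'))

    -- Encrypt the cipher to each listed gpg key id; the armored output is
    -- what would be stored in the remote's configuration.
    encryptCipherTo :: [String] -> String -> IO String
    encryptCipherTo keyids cipher = readProcess "gpg"
        (["--batch", "--encrypt", "--armor"]
            ++ concatMap (\k -> ["--recipient", k]) keyids)
        cipher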

I don't trust the location log, even for bup. Too many things could go
wrong.

Instead of remote=, use buprepo=
Anyone already using bup will need to re-run git annex initremote.

does have to run bup and reassemble files, after all

Since the queue is flushed in between subcommand actions being run,
there should be no issues with actions that expect to queue up some stuff
and have it run after they do other stuff. So I didn't have to audit for
such assumptions.
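
For illustration, a minimal sketch of such a queue (the names and
representation are assumptions, not git-annex's actual queue): actions
append git commands to it, and it is flushed, oldest first, between
subcommand actions.

    import Data.IORef

    -- Pending git subcommands and their parameters, newest first.
    type Queue = IORef [(String, [String])]

    newQueue :: IO Queue
    newQueue = newIORef []

    -- Called by actions that want some git command run later.
    queueCommand :: Queue -> String -> [String] -> IO ()
    queueCommand q subcommand params = modifyIORef q ((subcommand, params) :)

    -- Run everything queued, in the order it was added, then empty the queue.
    flushQueue :: Queue -> (String -> [String] -> IO ()) -> IO ()
    flushQueue q rungit = do
        pending <- readIORef q
        writeIORef q []
        mapM_ (uncurry rungit) (reverse pending)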

to avoid some issues with git on OSX with the mixed-case directories. No
migration is needed; the old mixed-case hash directories are still read;
new information is written to the new directories.
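
A sketch of what an all-lowercase hash directory can look like, assuming
the pureMD5 package (the exact slicing of the checksum is an assumption,
not necessarily git-annex's hashDirLower): hash the key and use two short,
lowercase hex prefixes as directory levels.

    import Data.Digest.Pure.MD5 (md5)
    import qualified Data.ByteString.Lazy.Char8 as L8
    import System.FilePath ((</>))

    -- Two directory levels taken from the hex md5 of the key; hex digits
    -- are already lowercase, so the path is stable on case-insensitive
    -- filesystems like OSX's default.
    hashDirLower :: String -> FilePath
    hashDirLower key = take 3 s </> take 3 (drop 3 s)
      where s = show (md5 (L8.pack key))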

And same file perms.

Two machines might have access to the same directory remote on different
paths, so don't include the path in its persistent config; instead, use
the git config to record it.
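
So the local path now comes from git config rather than from the remote's
shared configuration; a sketch of the lookup, where the config key name is
an assumption rather than a documented one:

    import System.Exit (ExitCode(ExitSuccess))
    import System.Process (readProcessWithExitCode)

    -- Read the directory remote's local path from this repository's git
    -- config, so each machine can point the same remote at its own path.
    directoryPath :: String -> IO (Maybe FilePath)
    directoryPath remotename = do
        (code, out, _) <- readProcessWithExitCode "git"
            ["config", "--get", "remote." ++ remotename ++ ".annex-directory"] ""
        return $ case (code, lines out) of
            (ExitSuccess, p:_) -> Just p
            _ -> Nothing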

So, it would be nicer to just use Cabal and take advantage of its
conditional compilation support. But Cabal seems to lack good support for
a package with an internal library that is used by multiple executables;
it wants to build everything twice or more. That's too slow for me.

Anyway, fairly soon I expect to upgrade hS3 to a requirement, and I can
just revert this.