path: root/Remote
Commit message (Author, Age)
* optimise initialized check (Joey Hess, 2011-08-17)
  Avoid running external command if annex.version is set.
* when reading configs of local repos, first initializeSafe (Joey Hess, 2011-08-17)
  This auto-generates a uuid if the local repo does not already have one.
* error out when dropping from http repo (Joey Hess, 2011-08-16)
* support for getting files from http git remotes (Joey Hess, 2011-08-16)
* reorg Remote/* (Joey Hess, 2011-08-16)
* split out generic url stuff into a helper library from Remote.Web (Joey Hess, 2011-08-16)
* support reading git config from http remotes (Joey Hess, 2011-08-16)
  The config file is downloaded to a temp file, and git-config run on that to parse it.
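  A minimal sketch of the parsing step just described, assuming the config has already been downloaded to a temp file; the function name and output handling are illustrative, not git-annex's actual code:

      import System.Process (readProcess)

      -- Let git-config parse the downloaded file, then split its key=value
      -- output (multi-line values are ignored for simplicity in this sketch).
      parseRemoteConfig :: FilePath -> IO [(String, String)]
      parseRemoteConfig tmpfile = do
              out <- readProcess "git" ["config", "--list", "--file", tmpfile] ""
              return $ map toPair $ lines out
        where
              toPair l = let (k, v) = break (== '=') l in (k, drop 1 v)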
* fix file name for web remote log files (Joey Hess, 2011-08-06)
  The key name was not being sufficiently escaped, although it didn't break anything due to luck. Switch to properly escaped key names for the log filename, with a fallback to the buggy old name.
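  A hedged sketch of that fallback; the paths and escaping below are purely illustrative, not the web remote's real layout:

      import System.Directory (doesFileExist)

      -- Prefer the properly escaped log filename, but fall back to the old,
      -- under-escaped name when only that file already exists.
      webLogFile :: String -> IO FilePath
      webLogFile key = do
              newexists <- doesFileExist new
              oldexists <- doesFileExist old
              return $ if not newexists && oldexists then old else new
        where
              new = "web/" ++ concatMap esc key ++ ".log"
              old = "web/" ++ key ++ ".log"
              esc c
                      | c `elem` "/&%:" = "%" ++ show (fromEnum c)
                      | otherwise = [c]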
* Fix shell escaping in rsync special remote. (Joey Hess, 2011-07-29)
* unify ellipsis handling (Joey Hess, 2011-07-19)
  And add a simple dots-based progress display, currently only used in the v2 upgrade.
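  For flavor, a minimal sketch of what a dots-based progress display can look like (names made up, not the actual upgrade code):

      import System.IO (hFlush, stdout)

      -- Print one dot per item processed, flushing so progress shows up
      -- immediately, and finish the line when done.
      withDots :: (a -> IO b) -> [a] -> IO [b]
      withDots action items = do
              rs <- mapM step items
              putStrLn ""
              return rs
        where
              step i = do
                      putStr "."
                      hFlush stdout
                      action i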
* finished hlint pass (Joey Hess, 2011-07-15)
* rename (Joey Hess, 2011-07-05)
* rename (Joey Hess, 2011-07-05)
* remove unused backend machinery (Joey Hess, 2011-07-05)
  The only remaining vestige of backends is different types of keys. These are still called "backends", mostly to avoid needing to change the user interface and configuration. But everything to do with storing keys in different backends is gone; instead, different types of remotes are used.
  In the refactoring, lots of code was moved out of odd corners like Backend.File to closer to where it's used, like Command.Drop and Command.Fsck. Quite a lot of dead code was removed. Several data structures became simpler, which may result in better runtime efficiency.
  There should be no user-visible changes.
* Drop the dependency on the haskell curl bindings, use regular haskell HTTP. (Joey Hess, 2011-07-04)
* make curl follow redirs (Joey Hess, 2011-07-01)
* download urls via tmp file, and support resuming (Joey Hess, 2011-07-01)
* add hashing to web log files (Joey Hess, 2011-07-01)
* add the addurl command (Joey Hess, 2011-07-01)
* add web special remote (Joey Hess, 2011-07-01)
  Generalized LocationLog to PresenceLog, and use a presence log to record urls for the web special remote.
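  Roughly, a presence log line pairs a timestamp and a present/absent flag with an arbitrary value: a repository uuid for location logs, a url for the web remote. A hedged sketch of such a record, with illustrative names:

      -- Constructor and field names are illustrative, not the real module.
      data LogStatus = InfoPresent | InfoMissing
              deriving (Eq, Show)

      data LogLine = LogLine
              { date :: Double        -- POSIX timestamp of the change
              , status :: LogStatus   -- whether the info is present or absent
              , info :: String        -- uuid or url, depending on the log
              } deriving (Show)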
* renamed GitRepo to Git (Joey Hess, 2011-06-30)
  It was always imported qualified as Git anyway.
* commit git-annex branch when copying to a remote (locally) (Joey Hess, 2011-06-22)
  Otherwise, the location log changes are only staged in its index, and this can confuse matters if pulling or cloning from the remote. The test suite was failing because this wasn't done.
* bugfix: stat parent dirs (Joey Hess, 2011-06-13)
* rsync is now used when copying files from repos on other filesystems (Joey Hess, 2011-06-13)
  cp is still used when copying files from repos on the same filesystem, since --reflink=auto can make it significantly faster on filesystems such as btrfs.
  Directory special remotes still use cp, not rsync. It's not clear what tmp file should be used when rsyncing to such a remote.
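  A sketch of that choice, with a made-up helper name and simplified options; the real code differs:

      import System.Exit (ExitCode(..))
      import System.Process (rawSystem)

      -- Use cp --reflink=auto when both repos are on the same filesystem
      -- (cheap on btrfs); otherwise rsync, which can resume partial copies.
      copyObject :: Bool -> FilePath -> FilePath -> IO Bool
      copyObject samefilesystem src dest = do
              code <- if samefilesystem
                      then rawSystem "cp" ["--reflink=auto", src, dest]
                      else rawSystem "rsync" ["-p", "--inplace", src, dest]
              return (code == ExitSuccess)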
* fix building with S3 stub (Joey Hess, 2011-06-10)
* rename modules for data types into Types/ directory (Joey Hess, 2011-06-01)
* Add --debug option. Closes: #627499 (Joey Hess, 2011-05-21)
  This takes advantage of the debug logging done by missingh, and I added my own debug messages for executeFile calls. There are still some other low-level ways git-annex runs stuff that are not shown by debugging, but this gets most of it easily.
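  Assuming the hslogger-style System.Log.Logger interface is what is meant here, --debug presumably amounts to something like this sketch (logger name and message are illustrative):

      import System.Log.Logger

      -- Raise the global log level so debugM messages become visible.
      enableDebugOutput :: IO ()
      enableDebugOutput = updateGlobalLogger rootLoggerName (setLevel DEBUG)

      example :: IO ()
      example = do
              enableDebugOutput
              debugM "git-annex" "executeFile: git [\"config\", \"--list\"]"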
* more standard names for whenM and unlessM operators (Joey Hess, 2011-05-17)
  These are defined in ifelse, but it's not currently available and I don't want to pull in a library for 6 lines of code anyhow. Also, ifelse sets the fixity to 1, which does not allow >>? error $ ...
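  Those six lines are presumably close to the standard definitions (the same names later appear in libraries such as extra):

      import Control.Monad (when, unless)

      -- Run an action only when (or unless) a monadic test returns True.
      whenM :: Monad m => m Bool -> m () -> m ()
      whenM c a = c >>= \b -> when b a

      unlessM :: Monad m => m Bool -> m () -> m ()
      unlessM c a = c >>= \b -> unless b a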
* add whenM and unlessM (Joey Hess, 2011-05-17)
  Just more golfing.. I am pretty sure something in a library somewhere can do this, but I have been unable to find it.
* more pointless monadic golfing (Joey Hess, 2011-05-16)
* IA: do not create bucket at initremote time (Joey Hess, 2011-05-16)
  This way, the metadata sent when uploading a file is applied to the bucket then.
* add a few tweaks to make it easy to use the Internet Archive's variant of S3 (Joey Hess, 2011-05-16)
  In particular, munge key filenames to comply with the IA's filename limits, disable encryption, support their nonstandard way of creating buckets, and allow x-amz-* headers to be specified in initremote to set item metadata.
  Still TODO: initremote does not handle multiword metadata headers right.
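  A hedged sketch of the filename munging, assuming a simple whitelist of allowed characters; the real rules and function name may differ:

      import Data.Char (isAlphaNum, isSpace, ord)

      -- Keep alphanumerics and a few safe punctuation characters, drop
      -- whitespace, and encode everything else as an &NNN; escape.
      iaMunge :: String -> String
      iaMunge = concatMap munge
        where
              munge c
                      | isAlphaNum c || c `elem` "_-." = [c]
                      | isSpace c = []
                      | otherwise = "&" ++ show (ord c) ++ ";"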
* refactor (Joey Hess, 2011-05-16)
* Maybe reduction pass 2 (Joey Hess, 2011-05-15)
* simplified a bunch of Maybe handling (Joey Hess, 2011-05-15)
* avoid always decrypting cipher (Joey Hess, 2011-05-01)
  Last change moved cipher decryption to remote setup time. Fixed this with a bit of a hack.
* factor out base64 code (Joey Hess, 2011-05-01)
* S3: When encryption is enabled, the Amazon S3 login credentials are stored, encrypted, in .git-annex/remotes.log, so environment variables need not be set after the remote is initialized. (Joey Hess, 2011-05-01)
* set ANNEX_HASH_* always (Joey Hess, 2011-04-29)
* hook special remote implemented, and tested (Joey Hess, 2011-04-28)
* Fix hasKeyCheap setting for bup and rsync special remotes. (Joey Hess, 2011-04-28)
* filter out --delete rsync option (Joey Hess, 2011-04-27)
  rsync does not have a --no-delete, so do it this way instead.
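  Since there is no --no-delete to append, the simplest fix is to strip --delete from the configured option list; a one-line sketch (helper name made up):

      -- Drop any --delete from user-supplied rsync options.
      filterDelete :: [String] -> [String]
      filterDelete = filter (/= "--delete")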
* rsync special remote (Joey Hess, 2011-04-27)
  Fully tested and working, including resuming and encryption. (Though not resuming when sending *with* encryption; gpg doesn't produce identical output each time.)
  Uses same layout as the directory special remote and the .git/annex/objects/ directory.
* ensure tmp dir exists (Joey Hess, 2011-04-21)
* fix S3 upload buffering problem (Joey Hess, 2011-04-21)
  Provide file size to new version of hS3.
* update on memory leak (Joey Hess, 2011-04-19)
  Finished applying to S3 the change that fixed the memory leak in bup, but it didn't seem to help S3.. with encryption it still grows to 2x file size.
* bup: Avoid memory leak when transferring encrypted data. (Joey Hess, 2011-04-19)
  This was a most surprising leak. It occurred in the process that is forked off to feed data to gpg. That process was passed a lazy ByteString of input, and ghc seemed to not GC the ByteString as it was lazily read and consumed, so memory slowly leaked as the file was read and passed through gpg to bup.
  To fix it, I simply changed the feeder to take an IO action that returns the lazy bytestring, and fed the result directly to hPut. AFAICS, this should change nothing WRT buffering. But somehow it makes ghc's GC do the right thing. Probably I triggered some weakness in ghc's GC (version 6.12.1).
  (Note that S3 still has this leak, and others too. Fixing it will involve another dance with the type system.)
  Update: One theory I have is that this has something to do with the forking of the feeder process. Perhaps, when the ByteString is produced before the fork, ghc decides it needs to hold a pointer to the start of it, for some reason -- maybe it doesn't realize that it is only used in the forked process.
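  In code, the fix described above amounts to handing the feeder an action rather than the ByteString itself; a hedged sketch with illustrative names:

      import System.IO (Handle)
      import qualified Data.ByteString.Lazy as L

      -- Before (leaky): feeder :: L.ByteString -> Handle -> IO ()
      -- After: the content is produced inside the forked feeder and fed
      -- straight to hPut, so nothing outside retains the lazy ByteString.
      feeder :: IO L.ByteString -> Handle -> IO ()
      feeder getcontent h = getcontent >>= L.hPut h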
* refactor (Joey Hess, 2011-04-19)
* Fix stalls in S3 when transferring encrypted data. (Joey Hess, 2011-04-19)
  Stalls were caused by code that did approximately:

      content' <- liftIO $ withEncryptedContent cipher content return
      return store content'

  The return evaluated without actually reading content from S3, and so the cleanup code began waiting on gpg to exit before gpg could send all its data.
  Fixing it involved moving the `store` type action into the IO monad:

      liftIO $ withEncryptedContent cipher content store

  Which was a bit of a pain to do, thank you type system, but avoids the problem as now the whole content is consumed, and stored, before cleanup.
* initremote: show gpg keys (Joey Hess, 2011-04-17)