| Commit message | Author | Age |
| |
So far I've only written progress bars for sending files, not yet
receiving.
No longer uses external cp at all. ByteString IO is fast enough.
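
As a hedged sketch of the kind of chunked ByteString copy loop with progress reporting this describes (the function name, chunk size, and callback are illustrative, not git-annex's actual code):

    import qualified Data.ByteString as B
    import System.IO

    -- Copy a file in fixed-size chunks using strict ByteString IO, calling
    -- back with the number of bytes copied so far after each chunk.
    copyWithProgress :: FilePath -> FilePath -> (Integer -> IO ()) -> IO ()
    copyWithProgress src dst report =
        withFile src ReadMode $ \hin ->
            withFile dst WriteMode $ \hout ->
                let go sofar = do
                        chunk <- B.hGet hin 65536
                        if B.null chunk
                            then return ()
                            else do
                                B.hPut hout chunk
                                let sofar' = sofar + toInteger (B.length chunk)
                                report sofar'
                                go sofar'
                in go 0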
| |
Avoiding writing files larger than a specified size is useful in certain
situations. For example, box.com has a file size limit of 100 mb. It could
also be useful on really crappy removable media.
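
A minimal sketch of the size check this implies, assuming a configured limit in bytes (names are illustrative):

    import System.IO (IOMode(ReadMode), hFileSize, withFile)

    -- Skip storing any file whose size exceeds the configured limit;
    -- no limit configured means everything is allowed.
    withinSizeLimit :: Maybe Integer -> FilePath -> IO Bool
    withinSizeLimit Nothing _ = return True
    withinSizeLimit (Just limit) file = do
        size <- withFile file ReadMode hFileSize
        return (size <= limit)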
| |
than the old "." (which still works too).
| |
Without a unicode locale, it fails when trying to print a unicode filename
to the console.
| |
Rather than go through the location log to see which files are present on
the remote, it simply looks at the disk contents directly.
I benchmarked this speeding up scanning 834 files, from an annex on my
phone's SSD, from 11.39 seconds to 1.31 seconds. (No files actually moved.)
Also benchmarked 8139 files, from an annex on spinning storage,
speeding up from 103.17 to 13.39 seconds.
Note that benchmarking with an encrypted annex on flash actually showed a
minor slowdown with this optimisation -- from 13.93 to 14.50 seconds. Seems
the overhead of doing the crypto needed to get the filenames to directly
check can be higher than the overhead of looking up data in the location
log. (Which says good things about how well the location log and git have
been optimised!) It *may* make sense to make encrypted local remotes not
have hasKeyCheap set; further benchmarking is called for.
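
For a local remote, this "look at the disk contents directly" check can be as simple as the sketch below (paths and names are illustrative; in git-annex terms, this is what makes hasKeyCheap reasonable for such remotes):

    import System.Directory (doesFileExist)
    import System.FilePath ((</>))

    -- Presence check that bypasses the location log entirely: just see
    -- whether the key's file exists in the remote's directory tree.
    checkPresentDirect :: FilePath -> FilePath -> IO Bool
    checkPresentDirect remotedir keypath = doesFileExist (remotedir </> keypath)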
| |
This is only to ensure that it's as new a version as it was built with, so
partial upgrades work.
| |
version of ssh and default annex.sshcaching accordingly.
| |
Added Annex.cleanup, which is a general purpose interface for adding
actions to run at the end.
Remotes with the old git-annex-shell will commit every time, and have no
commit command, so hide stderr when running the commit command.
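
A minimal sketch of such a cleanup interface, using an IORef rather than git-annex's Annex state (names and representation are assumptions):

    import Data.IORef
    import qualified Data.Map as M

    -- Named cleanup actions are registered during the run; registering the
    -- same name twice keeps the first action. All are run once at the end.
    type CleanupActions = IORef (M.Map String (IO ()))

    addCleanup :: CleanupActions -> String -> IO () -> IO ()
    addCleanup ref name action =
        modifyIORef ref (M.insertWith (\_new old -> old) name action)

    runCleanup :: CleanupActions -> IO ()
    runCleanup ref = do
        actions <- readIORef ref
        writeIORef ref M.empty
        sequence_ (M.elems actions)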
| |
Now changes are staged into the branch's index, but not committed,
which avoids growing a large journal. And sync and merge always
explicitly commit, ensuring that even when they do nothing else,
they commit the staged changes.
Added a flag file to indicate that the branch's journal contains
uncommitted changes. (Could use git ls-files, but don't want to run
that every time.)
In the future, this ability to have uncommitted changes staged in the
journal might be used on remotes after a series of oneshot commands.
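
The flag file amounts to something like this sketch (the path and function names are illustrative):

    import Control.Monad (when)
    import System.Directory (doesFileExist, removeFile)

    -- Marker recording that changes are staged in the branch's index but
    -- not yet committed, so callers need not run git ls-files to find out.
    dirtyFlag :: FilePath
    dirtyFlag = ".git/annex/index.dirty"   -- illustrative location

    setDirty :: IO ()
    setDirty = writeFile dirtyFlag ""

    clearDirty :: IO ()
    clearDirty = do
        exists <- doesFileExist dirtyFlag
        when exists (removeFile dirtyFlag)

    isDirty :: IO Bool
    isDirty = doesFileExist dirtyFlag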
| |
To avoid commits of data to the git-annex branch after each command
is run, set annex.alwayscommit=false. Its data will then be committed
less frequently, when a merge or sync is done.
| |
removing content from the annex.
I was able to reproduce this on linux using the kernel's nfs server and
mounting localhost:/. Determined that removing the directory fails when
the just-deleted file in it was locked. Considered dropping the lock
before removing the directory, but this would complicate parts of the code
that should not need to worry about locking. So instead, ignore the failure
to remove the directory in this case.
While I was at it, made it attempt to remove both levels of hash
directories, in case they're empty.
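
A sketch of the resulting best-effort removal, which ignores failures and also tries the hash directories above the key's directory (structure and names are illustrative):

    import Control.Exception (IOException, try)
    import Control.Monad (void)
    import System.Directory (removeDirectory)
    import System.FilePath (takeDirectory)

    -- Try to remove the key's directory and the two hash directory levels
    -- above it; each removal fails harmlessly if the directory is not
    -- empty, or if (as on NFS) a just-deleted locked file lingers in it.
    removeDirLevels :: FilePath -> IO ()
    removeDirLevels keydir = mapM_ tryRemove
        [ keydir
        , takeDirectory keydir
        , takeDirectory (takeDirectory keydir)
        ]
      where
        tryRemove d = void (try (removeDirectory d) :: IO (Either IOException ()))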
| |
storing it in remotes/web/xx/yy/foo.log meant lots of extra directory
objects in git. Now I use xx/yy/foo.log.web, which is just as unique, but
more efficient since foo.log is there anyway.
Of course, it still looks in the old location too.
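
Roughly, the layout change and the fallback look like this (path construction is illustrative):

    import System.FilePath ((<.>), (</>))

    -- New location: reuse the existing hash directories and tack a .web
    -- suffix onto the key's log name.
    newWebLog :: FilePath -> FilePath -> FilePath
    newWebLog hashdir key = hashdir </> key <.> "log" <.> "web"

    -- Old location, still checked when reading.
    oldWebLog :: FilePath -> FilePath -> FilePath
    oldWebLog hashdir key = "remotes" </> "web" </> hashdir </> key <.> "log"

    webLogLocations :: FilePath -> FilePath -> [FilePath]
    webLogLocations hashdir key = [newWebLog hashdir key, oldWebLog hashdir key]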
| |
files en masse.
| |
useful when adding hundreds of thousands of files on a system with plenty
of memory.
git add gets quite slow in such a large repository, so if the system has
more than the ~32 mb of memory the queue can use by default, it's a useful
optimisation to increase the queue size, in order to decrease the number
of times git add is run.
| |
The list of files had to be retained until the end so it could be deleted.
Also, a list of update-index lines was generated and only then fed into it.
Now everything streams in constant space.
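
A sketch of the streaming shape this describes, feeding lines to git update-index as they are produced instead of accumulating them first (flags and record format are illustrative):

    import Control.Monad (forM_)
    import System.IO (hClose, hPutStr)
    import System.Process

    -- Pipe records into git update-index one at a time; nothing forces the
    -- whole input list into memory at once.
    feedUpdateIndex :: [String] -> IO ()
    feedUpdateIndex records = do
        (Just hin, _, _, pid) <- createProcess
            (proc "git" ["update-index", "-z", "--index-info"])
                { std_in = CreatePipe }
        forM_ records $ \r -> hPutStr hin (r ++ "\0")
        hClose hin
        _ <- waitForProcess pid
        return ()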
| |
When hashing the files, the entire list of shas was read strictly.
That was entirely unnecessary, since there's a cleanup action run
after they're consumed.
| |
Turns out that commit really made some serious improvements to memory use.
With the lazy state monad, git-annex add in a huge tree grew seemingly
without bound until it overflowed the stack. With the strict monad,
it uses 42 mb max.
It's possible another change since the 3.20120123 release fixed that,
but a964012fc36d22e4554dd12e3594579fb3190501 seems most likely.
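
The change being credited is essentially swapping the lazy state monad for the strict one; a toy illustration (not git-annex code):

    import qualified Control.Monad.State.Strict as State
    -- previously: import qualified Control.Monad.State as State (the lazy version)

    -- With the strict monad (and a strict modify), a long chain of state
    -- updates runs in bounded space instead of piling up thunks.
    countUpdates :: Int -> Int
    countUpdates n = State.execState (mapM_ (\_ -> State.modify' (+ 1)) [1 .. n]) 0

    main :: IO ()
    main = print (countUpdates 1000000)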
| |
head), and records the size in the key.
| |
available, matches the size of the key.
If there's no Content-Length, or the key has no size, this check is not
done, but it should happen most of the time, and protect against web
content that has changed.
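
The check reduces to something like this (illustrative names):

    -- Only reject when both a Content-Length header and a recorded key
    -- size are known and they disagree; otherwise give the benefit of
    -- the doubt.
    sizeAcceptable :: Maybe Integer -> Maybe Integer -> Bool
    sizeAcceptable (Just contentlength) (Just keysize) = contentlength == keysize
    sizeAcceptable _ _ = True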
| |
Can be used to specify what file the url is added to. This can be used to
override the default filename that is used when adding an url, which is
based on the url. Or, when the file already exists, the url is recorded as
another location of the file.
| |
Done by adding a oneshot mode, in which location log changes are written to
the journal, but not committed. Taking advantage of git-annex's existing
ability to recover in this situation.
This is used by git-annex-shell and other places where changes are made to
a remote's location log.
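
Conceptually, oneshot mode changes only the end-of-run step; a sketch with illustrative types:

    -- Normally a run ends by committing journalled changes to the
    -- git-annex branch; in oneshot mode the journal write still happens,
    -- but the commit is skipped and left for a later full run to recover.
    data RunMode = Normal | OneShot
        deriving (Eq, Show)

    finishRun :: RunMode -> IO () -> IO ()
    finishRun OneShot _ = return ()
    finishRun Normal commitBranch = commitBranch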
| |
This drops the >>! and >>? with the nice low fixity. IfElse does have
undocumented >>=>>! and >>=>>? operators, but I deem that too fishy.
Anyway, using whenM and unlessM is easier; I sometimes mixed the operators
up.
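
For reference, whenM and unlessM are the usual monadic conditionals (git-annex carries its own copies in its utility code; these are the conventional definitions):

    import Control.Monad (unless, when)

    whenM :: Monad m => m Bool -> m () -> m ()
    whenM cond action = cond >>= \b -> when b action

    unlessM :: Monad m => m Bool -> m () -> m ()
    unlessM cond action = cond >>= \b -> unless b action

    -- e.g. whenM (doesFileExist f) (removeFile f)
    -- instead of doesFileExist f >>? removeFile f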
| |
Ssh connection caching is now enabled automatically by git-annex. Only one
ssh connection is made to each host per git-annex run, which can speed some
things up a lot, as well as avoiding repeated password prompts. Concurrent
git-annex processes also share ssh connections. Cached ssh connections are
shut down when git-annex exits.
Note: The rsync special remote does not yet participate in the ssh
connection caching.
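
The caching rests on OpenSSH's connection sharing; roughly the options involved (socket location and exact flags here illustrate the approach, not necessarily the precise ones git-annex passes):

    -- Options that make ssh reuse one connection per host via a control
    -- socket, so concurrent and subsequent git-annex invocations share it.
    sshCachingOptions :: FilePath -> [String]
    sshCachingOptions socketfile =
        [ "-S", socketfile
        , "-o", "ControlMaster=auto"
        , "-o", "ControlPersist=yes"
        ]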
| |
Avoids expensive file transfers, at the expense of checking file size
and/or contents.
Required some reworking of the remote code.
| |
Add news item recommending fscking directory special remotes.
Remove news item about the URL backend being removed; it was later added
back to be used by git annex addurl --fast.
Link NEWS into top level.
| |
Fscking a remote is now supported. It's done by retrieving
the contents of the specified files from the remote, and checking them,
so can be an expensive operation.
(Several optimisations are possible, to speed it up, of course.. This is
the slow and stupid remote fsck to start with.)
Still, if the remote is a special remote, or a git repository that you
cannot run fsck in locally, it's nice to have the ability to fsck it.
If you have any directory special remotes, now would be a good time to
fsck them, in case you were hit by the data loss bug fixed in the
previous release!
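
The slow-and-simple approach amounts to this sketch (the retrieve and verify actions stand in for git-annex's real remote and backend code):

    import Control.Exception (IOException, finally, try)
    import Control.Monad (void)
    import System.Directory (removeFile)

    -- Fsck one key on a remote: fetch its content into a temporary file,
    -- run the usual verification (eg, checksumming) on it, then clean up.
    fsckKeyOnRemote
        :: (FilePath -> IO Bool)   -- retrieve the key's content to this path
        -> (FilePath -> IO Bool)   -- verify the retrieved content
        -> FilePath                -- temporary file to use
        -> IO Bool
    fsckKeyOnRemote retrieve verify tmp = check `finally` cleanup
      where
        check = do
            ok <- retrieve tmp
            if ok then verify tmp else return False
        cleanup = void (try (removeFile tmp) :: IO (Either IOException ()))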
| |
When moving a file to the remote failed, and partially transferred content
was left behind in the directory, re-running the same move would think it
succeeded and delete the local copy.
I reproduced data loss when moving files to a partition that was almost
full. Interrupting a transfer could have similar results.
Easily fixed by using a temp file which is then moved atomically into place
once the transfer completes.
I've audited other calls to copyFileExternal, and other special remote
file transfer code; everything else seems to use temp files correctly
(rsync, git), or otherwise use atomic transfers (bup, S3).
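
The fix is the standard write-to-temp-then-rename pattern; a sketch (function and temp-file naming are illustrative):

    import System.Directory (copyFile, renameFile)
    import System.FilePath ((<.>))

    -- The content only appears under its final name once the transfer has
    -- fully completed, so an interrupted transfer can never be mistaken
    -- for a success by a later run.
    copyAtomically :: FilePath -> FilePath -> IO ()
    copyAtomically src dest = do
        let tmp = dest <.> "tmp"
        copyFile src tmp        -- may fail or be interrupted harmlessly
        renameFile tmp dest     -- atomic within the same filesystem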
| |
kills that ugly python message during build
| |
This way, the build log will indicate whether StatFS can be relied on.
I've tested all the failing architectures now, and on all of them,
the StatFS code now returns Nothing, rather than Just nonsense.
Also, if annex.diskreserve is set on a platform where StatFS is not
working, git-annex will complain.
Also, the Makefile was missing the sources target used when building with
cabal.
| |
git-annex FTBFS on s390, mips, powerpc, sparc. That StatFS code is failing
on all of them. At least on s390, the failure appears as:
Just (FileSystemStats {fsStatBlockSize = 4096, fsStatBlockCount = 0,
fsStatByteCount = 0, fsStatBytesFree = 0, fsStatBytesAvailable = 0,
fsStatBytesUsed = 0})
While I don't understand why this is happening, or how to fix it,
bandaid over it by checking for obviously bad values and returning Nothing.
That disables disk free space checking, but at least git-annex will work.
Upstream bug: http://code.google.com/p/xmobar/issues/detail?id=70
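
The bandaid described above boils down to a sanity filter on the returned stats; a sketch using a stand-in for the FileSystemStats type quoted in the message:

    -- Stand-in for the statfs result type shown above.
    data FileSystemStats = FileSystemStats
        { fsStatBlockSize :: Integer
        , fsStatBlockCount :: Integer
        , fsStatByteCount :: Integer
        , fsStatBytesFree :: Integer
        , fsStatBytesAvailable :: Integer
        , fsStatBytesUsed :: Integer
        }

    -- A filesystem reporting zero blocks and zero bytes is obviously
    -- nonsense; treat it as "no information" so disk free space checking
    -- is disabled rather than wrong.
    sanityCheck :: Maybe FileSystemStats -> Maybe FileSystemStats
    sanityCheck (Just stats)
        | fsStatBlockCount stats > 0 && fsStatByteCount stats > 0 = Just stats
    sanityCheck _ = Nothing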