symlinks when downloading from ftp.
Ambiguous occurrence `bracket'
It could refer to either `Control.Exception.bracket',
imported from `Control.Exception' at Utility/FileMode.hs:14:27-33
(and originally defined in `Control.Exception.Base')
or `Utility.Exception.bracket',
imported from `Utility.Exception' at Utility/FileMode.hs:22:1-24
(and originally defined in `Control.Monad.Catch')
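
A minimal sketch of the usual fix for this kind of ambiguity (not
necessarily the exact change made here): hide the clashing name from one
of the imports so the bare name resolves to a single binding.

    import Control.Exception hiding (bracket)
    import Control.Monad.Catch (bracket)

    -- bracket now unambiguously refers to Control.Monad.Catch.bracket.
    main :: IO ()
    main = bracket (putStrLn "acquire")
                   (\_ -> putStrLn "release")
                   (\_ -> putStrLn "use")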
Jessie.
466 lines of compat cruft deleted!
It's a code smell, and can lead to hard-to-diagnose error messages.
repo does not have a copy of the content, preserve the bad content in .git/annex/bad/ to avoid further data loss.
Since we started using this for git repos, when a remote was on another
drive, it resulted in a bogus relative path to it being used by git-annex,
which didn't work.
I don't quite understand the cause of the deadlock. It only occurred
when git-annex-shell transferinfo was being spawned over ssh to feed
download transfer progress back. And if I removed this line from
feedprogressback, the deadlock didn't occur:

    bytes <- readSV v

The problem was not a leaked FD, as far as I could see. So what was it?
I don't know.

Anyway, this is a nice clean implementation that avoids the deadlock.
Just fork off the async threads to handle filtering the stdout and stderr,
and let them clean up their handles whenever they decide to exit.

I've verified that the handles do get promptly closed, although a little
later than I would expect. Presumably that "little later" is what
was making waiting on the threads deadlock.

Despite the late exit, the last line of stdout and stderr appears where
I'd want it to, so I guess this is ok.
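
A minimal sketch of that fork-and-forget shape (illustrative code, not
git-annex's actual implementation): each filter thread owns its handle,
relays lines through (any filtering would go where the relay happens),
and closes the handle itself when it hits EOF, so nothing ever waits on
the thread.

    import Control.Concurrent.Async (async)
    import Control.Exception (finally)
    import System.IO

    -- Fork off a thread that copies lines from one handle to another,
    -- closing the source handle itself once EOF is reached.
    forkRelay :: Handle -> Handle -> IO ()
    forkRelay from to = () <$ async (relay `finally` hClose from)
      where
        relay = do
            eof <- hIsEOF from
            if eof
                then return ()
                else hGetLine from >>= hPutStrLn to >> relay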
commands to hang at the end of a file transfer.

The stderr reader blocks waiting for all of stderr, and so blocks the
process from ever exiting.

I tried several ways to get around this, but no success yet. For now,
disable the stderr reader entirely.
(eep!)
It sounds worse than it is. ;)
Some external special remotes may run commands that display progress on
stderr. If git-annex is run with --quiet, this should filter out such
displays while letting the errors through.
Came up with a generic way to filter out progress messages while keeping
errors, for commands that use stderr for both.
--json mode will disable command outputs too.
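
One plausible shape for such a filter (a guess at the heuristic, not
necessarily what git-annex does): progress displays typically redraw
themselves with carriage returns, while real errors arrive as ordinary
newline-terminated lines.

    import System.IO (Handle, hGetContents, hPutStrLn, stderr)

    -- Drop \r-terminated chunks (progress redraws); pass \n-terminated
    -- lines (presumed errors) through to stderr.
    filterProgress :: Handle -> IO ()
    filterProgress h = go =<< hGetContents h
      where
        go s = case break (`elem` "\r\n") s of
            (l, '\n':rest) -> hPutStrLn stderr l >> go rest
            (_, '\r':rest) -> go rest
            _              -> return ()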
This was broken in commit 95418cc430284b65af13105f7c63da08908dd826
single git-annex command. (try 2)
New approach is to do it the expensive way for the first 100 paths
on the command line, but then assume the user doesn't care about order too
much and fall back to the cheap way that does not preserve order.
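
A sketch of the hybrid strategy (hypothetical helper; the real seeking
code is considerably more involved):

    import System.Process (readProcess)

    -- First 100 paths: one git ls-files call each, preserving the
    -- command-line ordering. Remaining paths: a single batched call,
    -- whose output order is whatever git produces.
    seekPaths :: [FilePath] -> IO [FilePath]
    seekPaths ps = do
        let (front, rest) = splitAt 100 ps
        ordered <- concat <$> mapM (lsFiles . (: [])) front
        unordered <- if null rest then return [] else lsFiles rest
        return (ordered ++ unordered)
      where
        lsFiles paths = lines <$> readProcess "git" ("ls-files" : "--" : paths) ""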
passed to a single git-annex command."
This reverts commit 5492d8f4ddbd398e0188da9daed840908d1198c0.
Whoops, git ls-files does not always output in the input ordering.
That's why all this work is needed. Urk.
single git-annex command.
In this situation, curl -o exits successfully without creating the output
file.

There was already a workaround for curl file:/// urls, but I did not
realize this also affected regular url downloads.

To fix it, pre-create the destination file before starting curl. Since we
cannot always know the size of an url before trying to download it, let's
always do this.

Note that since curl is told -C -, we have to consider whether this
makes curl try to do a ranged download, which might fail on some servers
where a regular download would have succeeded. My testing indicates
this isn't a problem; since the file is empty, curl seems to not try to
do a ranged download.

Original report: https://github.com/datalad/datalad/issues/79
Curl bug report: https://github.com/bagder/curl/issues/183
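
A minimal sketch of the workaround (illustrative helper, not git-annex's
actual download code):

    import System.Exit (ExitCode(..))
    import System.Process (rawSystem)

    -- Pre-create the destination so that when curl -o "succeeds" without
    -- writing anything, an empty file is still left behind; -C - lets
    -- curl resume into it on retry.
    downloadWithCurl :: String -> FilePath -> IO Bool
    downloadWithCurl url dest = do
        writeFile dest ""
        code <- rawSystem "curl" ["-sS", "-C", "-", "-o", dest, url]
        return (code == ExitSuccess)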
strings that contained both unicode characters and a space (or '!')
character.

The fix is to stop using w82s, which does not properly reconstitute
unicode strings. Instead, use a utf8 bytestring to get the [Word8] to
base64. This passes unicode through perfectly, including any invalid
filesystem encoded characters.

Note that toB64 / fromB64 are also used for creds and cipher
embedding. It would be unfortunate if this change broke those uses.

For cipher embedding, note that ciphers can contain arbitrary bytes
(should really be using ByteString.Char8 there). Testing indicated it's
not safe to use the new fromB64 there; I think that characters were
incorrectly combined.

For credpair embedding, the username or password could contain unicode.
Before, that unicode would fail to round-trip through the b64.
So, I guess this is not going to break any embedded creds that worked
before.

This bug may have affected some creds before, and if so,
this change will not fix old ones, but should fix new ones at least.
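
A sketch of the approach, using text and base64-bytestring (git-annex's
actual toB64 / fromB64 also have to cope with the filesystem-encoding
case, which this simple version does not):

    import qualified Data.ByteString.Base64 as B64
    import qualified Data.Text as T
    import qualified Data.Text.Encoding as TE

    -- Round-trip through a UTF-8 ByteString instead of w82s, so
    -- multi-byte characters survive the encode/decode.
    toB64 :: String -> String
    toB64 = T.unpack . TE.decodeUtf8 . B64.encode . TE.encodeUtf8 . T.pack

    fromB64 :: String -> String
    fromB64 = either bad (T.unpack . TE.decodeUtf8)
        . B64.decode . TE.encodeUtf8 . T.pack
      where
        bad = error . ("bad base64 encoded data: " ++)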
(last release is ok)
hGetSomeString reads one byte at a time, so unicode bytes are not composed.
The problem comes when outputting that to the console with hPut; that
tries to apply the handle's encoding, and so we get mojibake.

Instead, use ByteStrings, and only convert to a string for parsing, not
for display.

Note that there are a couple of other things that use hGetSomeString,
which I've left as-is for now.
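
A sketch of the shape of the fix (illustrative):

    import qualified Data.ByteString as B
    import System.IO (Handle, stdout)

    -- Read raw bytes and write them back out as raw bytes, so the
    -- console never re-applies an encoding to a partially-read
    -- multibyte sequence. Only the copy handed to the parser is
    -- ever decoded to a String.
    relayChunk :: Handle -> IO B.ByteString
    relayChunk h = do
        b <- B.hGetSome h 1024
        B.hPut stdout b
        return b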
process-1.2

createProcess has been changed to throw an exception if the program is
not in the PATH.
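
A sketch of what callers now have to be prepared for (hypothetical
helper):

    import Control.Exception (IOException, try)
    import System.IO (Handle)
    import System.Process

    -- With process-1.2, a missing executable surfaces as an IOException
    -- thrown by createProcess itself, rather than as a later failure.
    tryStart :: FilePath -> [String] -> IO (Maybe ProcessHandle)
    tryStart cmd args = do
        r <- try (createProcess (proc cmd args))
                :: IO (Either IOException
                        (Maybe Handle, Maybe Handle, Maybe Handle, ProcessHandle))
        return $ case r of
            Left _ -> Nothing
            Right (_, _, _, pid) -> Just pid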
staged that contained backslashes.
This will prevent backporting to wheezy, but it's time to simplify the
code.
will consider using it, if it's reasonable and doesn't conflict with an existing file. (--file overrides this)
exporting Unit allows custom data units
This reverts commit b6b368ed036f2e34ee4b7d39e5b41b1ba2d0a76c.

Consider:

    relPathDirToFile (absPathFrom "/tmp/repo/xxx" "y/bar") "/tmp/repo/.git/annex/objects/xxx"

This needs to always yield "../../../.git/annex/objects/xxx", but on
Windows it is "..\\..\\/tmp/repo/.git/annex/objects/xxx".
This is necessary for interop between inode caches created on unix and
windows. Which is more important than supporting inodecaches for large keys
with the wrong size, which are broken anyway.
There should be no slowdown from this change, except on Windows.
Avoid using fileSize, which maxes out at just 2 gb on Windows.
Instead, use hFileSize, which doesn't have a bounded size.
Fixes support for files > 2 gb on Windows.

Note that the InodeCache code only needs to compare a file size,
so it doesn't matter if the file size wraps. So it has been
left as-is. This was necessary both to avoid invalidating existing inode
caches, and because the code passed FileStatus around and would have become
more expensive if it called getFileSize.

This commit was sponsored by Christian Dietrich.
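
A minimal sketch of the replacement (the existing stat-based path can
stay on POSIX, where fileSize is cheap and not truncated):

    import System.IO (IOMode(ReadMode), hFileSize, withFile)

    -- hFileSize returns an unbounded Integer, unlike the 32-bit-limited
    -- fileSize from the portability layer on Windows.
    getFileSize :: FilePath -> IO Integer
    getFileSize f = withFile f ReadMode hFileSize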