This makes -Jn work with --json and --quiet; before, setting -Jn
disabled those options.

Concurrent JSON output is currently a mess, though, since threads
write chunks on top of one another.
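
One way to untangle that output (an illustrative sketch, not
necessarily how git-annex later fixed it) is to serialize emission
with a shared MVar, so each thread prints one complete JSON object at
a time:

    import Control.Concurrent.MVar
    import qualified Data.ByteString.Lazy.Char8 as L

    -- Each worker takes the lock before emitting a complete JSON
    -- object, so objects from different threads never interleave.
    emitJson :: MVar () -> L.ByteString -> IO ()
    emitJson lock obj = withMVar lock $ \() -> L.putStrLn obj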

|

have the same costs.

This is only done in -J mode, because only with concurrency can
downloading from two remotes be faster. Without concurrency,
sequential downloads from the same remote are likely faster than
switching back and forth between two remotes.

There is some hairy MVar code here, but basically it just keeps the
activeremotes MVar full except when deciding which remote to assign
to a thread.

Also affects gets done by sync --content -J.

This commit was sponsored by Jochen Bartl.
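
A rough sketch of that MVar pattern (hypothetical names; the real
code tracks more state): the map of active remotes only leaves the
MVar while a remote is being chosen, so other threads block just for
the duration of selection:

    import Control.Concurrent.MVar
    import Data.List (minimumBy)
    import Data.Ord (comparing)
    import qualified Data.Map.Strict as M

    -- The MVar maps each remote to the number of threads using it,
    -- and is only ever empty while a choice is being made.
    pickRemote :: Ord r => MVar (M.Map r Int) -> [r] -> IO r
    pickRemote active remotes = do
        m <- takeMVar active
        -- assumes a non-empty list of same-cost remotes
        let r = minimumBy (comparing (\x -> M.findWithDefault 0 x m)) remotes
        putMVar active (M.insertWith (+) r 1 m)
        return r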

|

unused after last commit

|

This was disabled in commit 7ca8bf3321d1b62ea4e817e28914ed2fa56afe30,
because only the assistant used them, and they were clutter. But now
--failed also uses them.

Remove the failure log files after successful transfers. That should
avoid most of the clutter problems.

Commit 7ca8bf3321d1b62ea4e817e28914ed2fa56afe30 mentions a subtle
behavior change, which has now been reverted:

  There is one behavior change from this. If glacier is being used,
  and a manual git annex get --from glacier fails because the file
  isn't available yet, the assistant will no longer later see that
  failed transfer file and retry the get.
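
The resulting lifecycle, sketched with a hypothetical helper (not
git-annex's real API): record a log file when a transfer fails, and
clear it once a later transfer of the same key succeeds:

    import Control.Exception (catch, throwIO)
    import System.Directory (removeFile)
    import System.IO.Error (isDoesNotExistError)

    -- Write the failure log after a failed transfer; remove it
    -- after a successful one, tolerating it already being gone.
    updateFailureLog :: FilePath -> Bool -> IO ()
    updateFailureLog logfile succeeded
        | succeeded = removeFile logfile
            `catch` \e -> if isDoesNotExistError e
                then return ()
                else throwIO e
        | otherwise = writeFile logfile ""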

|

Note that get --from foo --failed will get things that a previous get
--from bar tried and failed to get, etc. I considered making --failed
only retry transfers from the same remote, but it was easier, and
seems more useful, to not have the same-remote requirement.

Noisy due to some refactoring into Types/.

|

remote that does not have a UUID. This particularly impacted clones
from gcrypt repositories.

Added a guard in Annex.Transfer to prevent this problem at a deeper
level.

I'm unhappy with NoUUID, but having Maybe UUID instead wouldn't help
either if nothing checked that there was a UUID. Since there
legitimately need to be Remotes that do not have a UUID, I can't see
a way to fix it at the type level, short of making two separate types
of Remotes.
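
A minimal sketch of that kind of guard (an assumed shape; the actual
check in Annex.Transfer may differ):

    -- git-annex represents a missing UUID explicitly rather than
    -- with Maybe, roughly like this.
    data UUID = UUID String | NoUUID
        deriving (Eq, Show)

    -- Refuse to log a transfer against a remote with no UUID,
    -- instead of writing a bogus transfer log entry.
    guardHasUUID :: UUID -> IO a -> IO a
    guardHasUUID NoUUID _ =
        ioError (userError "cannot transfer with a remote that has no UUID")
    guardHasUUID (UUID _) a = a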

|

Before, the call to mkProgressUpdater created the directory as a
side effect, but since that ignored failure to create it, this led to
a "does not exist" exception when the transfer lock file was created,
rather than a permissions error. So, make sure the directory exists
before trying to lock the file in it.

When a PermissionDenied exception is caught, skip making the transfer
lock. This lets downloads from readonly remotes happen.

If an upload is being tried, and the lock file can't be written due
to permissions, then probably the actual transfer will fail for the
same reason, so I think it's ok that it continues without taking the
lock in that case.
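
A minimal sketch of that approach (simplified: creating the file
stands in for taking the real fcntl lock):

    import Control.Exception (throwIO, try)
    import System.Directory (createDirectoryIfMissing)
    import System.FilePath (takeDirectory)
    import System.IO.Error (isPermissionError)

    -- Ensure the directory exists first; if locking fails due to
    -- permissions, carry on without the lock (readonly remote).
    withTransferLock :: FilePath -> IO a -> IO a
    withTransferLock lockfile action = do
        r <- try $ do
            createDirectoryIfMissing True (takeDirectory lockfile)
            writeFile lockfile ""
        case r of
            Left e
                | isPermissionError e -> action
                | otherwise -> throwIO e
            Right () -> action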

|

In c3b38fb2a075b4250e867ebd910324c65712c747, it actually only handled
uploading objects to a shared repository. Avoiding verification when
downloading objects from a shared repository was a lot harder.

On the plus side, if the process of downloading a file from a remote
is able to verify its content on the side, the remote can indicate
this now, and avoid the extra post-download verification.

As yet, I don't have any remotes (except Git) using this ability.
Some more work would be needed to support it in special remotes.

It would make sense for tahoe to implicitly verify things downloaded
from it; as long as you trust your tahoe server (which typically runs
locally), there's cryptographic integrity. OTOH, despite bup being
based on SHAs, a bup repo under an attacker's control could have the
git ref used for an object changed, and so a bup repo shouldn't
implicitly verify. Indeed, tahoe seems unique in being trustworthy
enough to implicitly verify.
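
The idea reduces to a small type (a sketch; git-annex's actual
definitions may differ): the retrieval action reports whether content
was already verified in passing, and the post-download checksum only
runs when it wasn't:

    data Verification = Verified | UnVerified

    -- Run the expensive checksum only when the remote could not
    -- verify the content as a side effect of the transfer.
    postDownloadCheck :: Verification -> IO Bool -> IO Bool
    postDownloadCheck Verified _ = return True
    postDownloadCheck UnVerified checksum = checksum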

|

The one exception is in Utility.Daemon. As long as a process only
daemonizes once, which seems reasonable, and as long as it avoids
calling checkDaemon once it's already running as a daemon, the fcntl
locking gotchas won't be a problem there.

Annex.LockFile has its own separate lock pool layer, which has been
renamed to LockCache. This is a cache of locks that persist until
explicitly closed.

This is not quite done; lockContent still needs to be converted.
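
The core of such a lock cache, roughly sketched (the real LockCache
carries more bookkeeping): once a path is locked, the handle stays
cached and the lock stays held until dropped, which also sidesteps
fcntl's gotcha that closing any fd on a file releases its locks:

    import Control.Concurrent.MVar
    import qualified Data.Map.Strict as M

    type LockCache h = MVar (M.Map FilePath h)

    -- Reuse a held lock when one is cached; otherwise acquire it
    -- once and remember the handle.
    cacheLock :: LockCache h -> FilePath -> IO h -> IO h
    cacheLock cache file acquire = modifyMVar cache $ \m ->
        case M.lookup file m of
            Just h -> return (m, h)
            Nothing -> do
                h <- acquire
                return (M.insert file h m, h)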

|

get/unused/info commands are run.
Deleting lock files is tricky, tricky stuff. I think I got it right!
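
One reason it is tricky, sketched under the assumption that the
caller already holds an exclusive fcntl lock on the fd (error
handling elided): the path must still name the inode the lock is held
on, otherwise another process has already deleted and recreated the
lock file, and unlinking would remove that process's lock file
instead:

    import System.Posix.Files (fileID, getFdStatus, getFileStatus, removeLink)
    import System.Posix.Types (Fd)

    -- Unlink the lock file only if the path still refers to the
    -- same inode as the locked fd that was opened from it.
    safeDeleteLock :: Fd -> FilePath -> IO ()
    safeDeleteLock fd path = do
        locked <- fileID <$> getFdStatus fd
        ondisk <- fileID <$> getFileStatus path
        if locked == ondisk
            then removeLink path
            else return ()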

|

This affected callers that used forwardRetry; if the first attempt
failed, it would clean up the transfer lock before retrying.
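
The general shape of the fix, sketched with assumed lock and attempt
actions (not git-annex's real functions): bracket the whole retry
loop with the lock, rather than bracketing each attempt:

    import Control.Exception (bracket_)

    -- Hold the transfer lock across every attempt, so no retry can
    -- run after the lock has been cleaned up.
    retryingTransfer :: IO () -> IO () -> (Int -> IO Bool) -> Int -> IO Bool
    retryingTransfer lock unlock attempt maxtries =
        bracket_ lock unlock (go 1)
      where
        go i = do
            ok <- attempt i
            if ok || i >= maxtries
                then return ok
                else go (i + 1)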

|

running at once.

As discussed in the bug report.

|

Should be no behavior changes, just simplified code.
The only actual difference is it doesn't truncate the lock file.
I think that was a holdover from when transfer info was written to the lock
file.

|

used.

Only the assistant uses these, and only the assistant cleans them up,
so make only git annex transferkeys write them.

There is one behavior change from this. If glacier is being used, and
a manual git annex get --from glacier fails because the file isn't
available yet, the assistant will no longer later see that failed
transfer file and retry the get. Hope no-one depended on that old
behavior.

|

Avoid using fileSize, which maxes out at just 2 GB on Windows.
Instead, use hFileSize, which doesn't have a bounded size. Fixes
support for files > 2 GB on Windows.

Note that the InodeCache code only needs to compare file sizes, so it
doesn't matter if the file size wraps. So it has been left as-is.
This was necessary both to avoid invalidating existing inode caches,
and because the code passed FileStatus around and would have become
more expensive if it called getFileSize.

This commit was sponsored by Christian Dietrich.
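
A sketch of the workaround (assuming a plain getFileSize wrapper):
hFileSize returns an unbounded Integer, so nothing truncates at 2^31
bytes:

    import System.IO (IOMode(ReadMode), hFileSize, withFile)

    -- hFileSize returns an Integer rather than a fixed-width
    -- FileOffset, so large files report their real size on Windows.
    getFileSize :: FilePath -> IO Integer
    getFileSize f = withFile f ReadMode hFileSize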

|

This fixes all instances of " \t" in the code base. The most common
case seems to be after a "where" line; probably vim copied the
two-space layout of that line.

Done as a background task while listening to episode 2 of the Type
Theory podcast.

|

processes were both working to perform the same set of transfers.

|

them being inherited by child processes such as git commands. (With
the exception of daemon pid locking.)

This fixes part of #758630. I reproduced the assistant locking, e.g.,
a removable drive's annex journal lock file and forking a
long-running git-cat-file process that inherited that lock.

This did not affect Windows.

Considered doing a portable Utility.LockFile layer, but git-annex
uses POSIX locks in several special ways that have no direct Windows
equivalent, and it seems like it would mostly be a complication.

This commit was sponsored by Protonet.
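
A standard way to keep a lock fd out of children, shown as a sketch
(whether this exact mechanism is what the commit used is an
assumption):

    import System.Posix.IO (FdOption(CloseOnExec), setFdOption)
    import System.Posix.Types (Fd)

    -- Mark the lock file descriptor close-on-exec, so child
    -- processes such as spawned git commands do not inherit it.
    setLockCloseOnExec :: Fd -> IO ()
    setLockCloseOnExec fd = setFdOption fd CloseOnExec True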

|

that already has a transfer lock file indicating it's being sent to that remote. The remote may have moved between networks, or reconnected.

|

Removed old extensible-exceptions, which was only needed for very old
ghc.

Made webdav use Utility.Exception, to work after some changes in
DAV's exception handling.

Removed Annex.Exception. Mostly this was trivial, but note that
tryAnnex is replaced with tryNonAsync and catchAnnex replaced with
catchNonAsync. In theory that could be a behavior change, since the
former caught all exceptions, and the latter don't catch async
exceptions. However, in practice, nothing in the Annex monad uses
async exceptions. Grepping for throwTo and killThread only finds uses
in the assistant, which do not seem related.

Command.Add.undo is changed to accept a SomeException, and things
that use it for rollback now catch non-async exceptions, rather than
only IOExceptions.
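
The essence of tryNonAsync, sketched in plain IO (the version in
Utility.Exception is polymorphic over the monad): catch everything
except asynchronous exceptions, which are rethrown:

    import Control.Exception

    -- Async exceptions (whose toException wraps them in
    -- SomeAsyncException) propagate; everything else is returned.
    tryNonAsync :: IO a -> IO (Either SomeException a)
    tryNonAsync a = fmap Right a `catches`
        [ Handler (\e -> throwIO (e :: SomeAsyncException))
        , Handler (\e -> return (Left (e :: SomeException)))
        ]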

|

For example, I had a copy to a remote that was failing for an unknown
reason. This let me see the exception, createDirectory: permission
denied, which pointed straight at the underlying permissions problem.

|

Motivation: Hook scripts for nautilus or other file managers
need to provide the user with feedback that a file is being downloaded.
This commit was sponsored by THM Schoemaker.