Commit messages
* Most remotes have meters in their implementations of retrieveKeyFile
  already. Simply hooking these up to the transfer log makes that
  information available. Easy peasy.

  This is particularly valuable information for encrypted remotes, which
  otherwise bypass the assistant's polling of temp files, and so don't have
  good progress bars yet.

  Still some work to do here (see progressbars.mdwn changes), but this is
  entirely an improvement over the lack of progress bars for encrypted
  downloads.
* There was confusion in different parts of the progress bar code about
  whether an update contained the total number of bytes transferred, or the
  number of bytes transferred since the last update. One way this bug
  showed up was progress bars that seemed to stick at zero for a long time.

  In order to fix it comprehensively, I added a new BytesProcessed data type
  that is explicitly a total quantity of bytes, not a delta.

  Note that this doesn't necessarily fix every problem with progress bars.
  In particular, buffering can now cause progress bars to seem to run ahead
  of transfers, reaching 100% while data is still being uploaded.
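  A minimal sketch of the idea (the MeterUpdate alias and the adapter below
  are illustrative, not necessarily git-annex's exact API): the meter always
  receives a running total, and anything that naturally reports deltas is
  adapted once, at the edge.

      import Data.IORef

      -- A total byte count, explicitly distinct from a per-chunk delta.
      newtype BytesProcessed = BytesProcessed Integer
          deriving (Eq, Ord, Show)

      -- A progress meter always receives the running total.
      type MeterUpdate = BytesProcessed -> IO ()

      -- Adapt a producer that reports deltas (bytes since the last update)
      -- into one that feeds running totals to the meter.
      deltaToTotal :: MeterUpdate -> IO (Integer -> IO ())
      deltaToTotal meter = do
          total <- newIORef 0
          return $ \delta -> do
              t <- atomicModifyIORef total (\n -> let n' = n + delta in (n', n'))
              meter (BytesProcessed t)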
* already in progress, rather than failing with no indication why.
* Rather than forking a git-annex transferkey only to have it fail,
  just immediately record the failed transfer (so when the drive is plugged
  in, the scan will retry it).
* This is possible now that we build-depend on QuickCheck.
* and the code gets better..
* Fixed a bug that QuickCheck turned up.
* The newline after the filename was included in it.

  This was generally benign -- mostly these filenames are just displayed,
  and the newline didn't matter. But in the assistant, it caused unexpected
  dropping of preferred content.

  A characteristic of this bug is that the drop was displayed like this:

      drop some_file
      ok
* This used to work, but got broken when the transfer info files were added,
  as it failed writing them on the readonly filesystem.
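  One plausible shape of such a fix, as a hypothetical helper (the commit's
  actual change may differ): treat failure to write the info file as
  non-fatal, so the transfer itself still proceeds.

      import Control.Exception (IOException, try)

      -- Best-effort write of a transfer info file; a write failure (for
      -- example on a readonly filesystem) is ignored rather than fatal.
      writeInfoFileBestEffort :: FilePath -> String -> IO ()
      writeInfoFileBestEffort f s = do
          _ <- try (writeFile f s) :: IO (Either IOException ())
          return ()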
* Fix resuming of downloads, which do not have a transfer info file to read.

  When checking upload progress, use the MVar, rather than re-reading
  the info file.

  Catch exceptions in the transfer action. Required a tryAnnex.
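  A simplified stand-in for that last point; the real tryAnnex runs in
  git-annex's Annex monad, but the shape is the same:

      import Control.Exception (SomeException, try)

      -- Catch any exception from the transfer action so it can be recorded
      -- as a failed transfer instead of escaping the transfer thread.
      runTransferAction :: IO Bool -> IO Bool
      runTransferAction transfer = do
          r <- try transfer :: IO (Either SomeException Bool)
          case r of
              Left _err -> return False  -- record as failed
              Right ok  -> return ok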
* When a transfer fails, the progress info can be used to intelligently
  retry it. If the transfer managed to make some progress, but did not
  fully complete, then there's a good chance that a retry will finish it
  (or at least make more progress).
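  An illustrative retry policy along those lines (not git-annex's exact
  code): retry when the failed attempt moved the byte counter forward.

      -- Decide whether to retry, given the bytesComplete values recorded
      -- before and after the failed attempt.
      shouldRetry :: Maybe Integer -> Maybe Integer -> Bool
      shouldRetry oldBytes newBytes = case (oldBytes, newBytes) of
          (Just old, Just new) -> new > old  -- made progress; retry may finish it
          (Nothing,  Just _)   -> True       -- first progress information seen
          _                    -> False      -- no evidence of progress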
* TODO: Use this when running sendkey, to feed back transfer info from the
  client side rsync.
* transferred
* Transfer info files are updated when the callback is called, updating
  the number of bytes transferred.

  Left unused p variables at every place the callback should be used,
  which is rather a lot..
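  A minimal sketch of what such a callback does, assuming a hypothetical
  cut-down TransferInfo (the real record tracks more than bytes):

      import Control.Concurrent.MVar

      data TransferInfo = TransferInfo { bytesComplete :: Maybe Integer }

      -- A meter callback that stores the running total in the shared MVar,
      -- so progress can be checked without re-reading the info file.
      recordProgress :: MVar TransferInfo -> Integer -> IO ()
      recordProgress var total =
          modifyMVar_ var $ \info -> return info { bytesComplete = Just total }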
* And add a form to add another, unrelated repository
* files and reading from checksum commands.
* Change alterTransferInfo to not merge in old values, including
  transferPaused.
* bytesComplete value
* Doesn't fix the bug I thought it'd fix, but is clearly correct.
* Since it turned out to make sense to always scan all remotes on startup,
  there's no need to persist the info about which have been scanned.
* of a remote
* A paused transfer's thread keeps running, keeping the slot in use.
  This is intentional; pausing a transfer should not let other
  queued transfers run in its place.
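  A sketch of that slot discipline, with hypothetical names (git-annex's
  actual slot bookkeeping differs in detail): the slot is released only
  when the bracket exits, and a paused thread never exits it.

      import Control.Concurrent.QSemN
      import Control.Exception (bracket_)

      -- Hold a transfer slot for the whole lifetime of the transfer thread;
      -- pausing keeps the thread inside the bracket, so the slot stays taken.
      inTransferSlot :: QSemN -> IO a -> IO a
      inTransferSlot slots transfer =
          bracket_ (waitQSemN slots 1) (signalQSemN slots 1) transfer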
* This commit includes a paydown on technical debt incurred two years ago,
  when I didn't know that it was bad to make custom Read and Show instances
  for types. As the routes need Read and Show for Transfer, which includes a
  Key, and deriving my own Read instance of Key was not practical,
  I had to finally clean that up.

  So the compact Key read and show functions are now file2key and key2file,
  and Read and Show are now derived instances.

  Changed all code that used the old instances; compiler checked.
  (There were a few places, particularly in Command.Unused and the test
  suite, where the Show instance continues to be used for legitimate
  comparisons; i.e. show key_x == show key_y (though really in a bloom
  filter).)
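  A sketch of the distinction; the Key and its serialization here are
  heavily simplified and are not git-annex's real key format:

      -- Read and Show are plain derived instances again.
      data Key = Key { keyBackend :: String, keyName :: String }
          deriving (Eq, Ord, Read, Show)

      -- The compact serialization now lives in ordinary functions rather
      -- than in custom Show/Read instances.
      key2file :: Key -> FilePath
      key2file k = keyBackend k ++ "-" ++ keyName k

      file2key :: FilePath -> Maybe Key
      file2key s = case break (== '-') s of
          (b, '-':n) | not (null b) && not (null n) -> Just (Key b n)
          _ -> Nothing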
* Avoid crashing when "git annex get" fails to download from one location,
  and falls back to downloading from a second location.

  The problem is that git annex get calls download recursively from within
  itself if the first download attempt fails. So the first time through, it
  writes a transfer info file, which is then overwritten on the second,
  recursive call. Then on cleanup, it tries to delete the file twice, which
  of course doesn't work.

  Fixed both by not crashing if the transfer file is removed, and by
  changing Get to not run download recursively like that. It's the only
  thing that did so, and it just seems like a bad idea.
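  The "not crashing" half can be pictured with a hypothetical helper like
  this (plain IO, not the actual change): removing an info file that is
  already gone is treated as success.

      import Control.Exception (catch, throwIO)
      import System.Directory (removeFile)
      import System.IO.Error (isDoesNotExistError)

      -- Tolerate the transfer info file having already been removed.
      removeTransferFile :: FilePath -> IO ()
      removeTransferFile f = removeFile f `catch` \e ->
          if isDoesNotExistError e
              then return ()
              else throwIO e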
* yowza!!!
* New log file format.
* This should fix OSX/BSD issues with not noticing transfer information
  files with kqueue. Now that threads are used, the thread can manage the
  transfer slot allocation and deallocation by itself; much cleaner.
* Also converted its timestamp to posix seconds, as is used in the other
  log files.
* This seems to happen with kqueue, not inotify. The newly added lck file
  triggered an add event and was then parsed as a transfer file.
* foo.lck could be a lock file for a transfer of foo, or a transfer of a key
  that happened to end in ".lck". Fix this by using "lck.foo" instead.
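  A sketch of the disambiguated naming, using a hypothetical helper: a
  "lck." prefix cannot collide with a key name that merely ends in ".lck".

      import System.FilePath

      -- Derive the lock file name from the transfer info file name by
      -- prefixing, not suffixing.
      transferLockFile :: FilePath -> FilePath
      transferLockFile infofile =
          takeDirectory infofile </> ("lck." ++ takeFileName infofile)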
* Since the lock file has to be kept open, this prevented the TransferWatcher
  from noticing when it appeared, since inotify (and more importantly kqueue)
  events happen when a new file is closed. Writing a separate info file fixes
  that problem.
* There's still a bug; if the child updates its transfer info file,
  then the data from it will supersede the TransferInfo, losing the
  info that we should wait on this child.
* finally!