repository.
annex/objects but didn't reach the work tree.
This also handles fixing up after f9dfeaf801da2e4d5879b3de5895dc3cef68a329.
This also handles fixing up after the bad data written by
f9dfeaf801da2e4d5879b3de5895dc3cef68a329.
When on a crippled filesystem, or without annex.thin set.
needs it.
This only adds 1 stat per fscked file for locked files, so the
added overhead is minimal.
For unlocked files it has to access the database to see if a file
is modified.
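A rough sketch of that per-file check, with invented record and function
names (git-annex's real check goes through its InodeCache and keys
database); a locked file only needs the stat itself, while an unlocked
file's stat is compared against what was recorded when the content was
added:

    import System.Posix.Files (fileSize, getFileStatus, modificationTime)
    import System.Posix.Types (EpochTime, FileOffset)

    -- Invented stand-in for the stat data recorded in the keys database.
    data RecordedStat = RecordedStat
        { recordedSize  :: FileOffset
        , recordedMTime :: EpochTime
        }

    -- An unlocked file counts as modified when its current size or mtime
    -- no longer matches the recorded values.
    isModified :: FilePath -> RecordedStat -> IO Bool
    isModified file r = do
        s <- getFileStatus file
        return (fileSize s /= recordedSize r
             || modificationTime s /= recordedMTime r)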
keyLocations doesn't return locations in dead repos, but if we're fscking a
dead repo, we want to look at what locations are actually logged for it.
In c3b38fb2a075b4250e867ebd910324c65712c747, it actually only handled
uploading objects to a shared repository. Avoiding verification when
downloading objects from a shared repository was a lot harder.
On the plus side, if the process of downloading a file from a remote
is able to verify its content on the side, the remote can indicate this
now, and avoid the extra post-download verification.
As of yet, I don't have any remotes (except Git) using this ability.
Some more work would be needed to support it in special remotes.
It would make sense for tahoe to implicitly verify things downloaded from it;
as long as you trust your tahoe server (which typically runs locally),
there's cryptographic integrity. OTOH, despite bup being based on shas,
a bup repo under an attacker's control could have the git ref used for an
object changed, and so a bup repo shouldn't implicitly verify. Indeed,
tahoe seems unique in being trustworthy enough to implicitly verify.
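The shape of the change, as a minimal sketch with made-up names
(Verification, retrieveAndCheck, and verifyKey are illustrative, not
git-annex's actual Remote interface): the download action reports whether
it verified the content as a side effect, and only unverified downloads get
the extra checksum pass.

    data Verification = Verified | UnVerified

    -- Stand-in for hashing the downloaded file and comparing it to the key.
    verifyKey :: FilePath -> IO Bool
    verifyKey _file = return True

    retrieveAndCheck :: (FilePath -> IO Verification) -> FilePath -> IO Bool
    retrieveAndCheck retrieve dest = do
        v <- retrieve dest
        case v of
            Verified   -> return True     -- remote verified in passing (e.g. tahoe)
            UnVerified -> verifyKey dest  -- fall back to post-download verification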
No behavior changes.
appropriate places.
Not necessarily everywhere, but a lot of the most often used places.
Re the use of .Internal, see
https://github.com/pcapriotti/optparse-applicative/issues/155
Ben Boeckel had a patch, but..
Actually, that was not the only place that used ScheduleIncremental when
built w/o database. Since the data type doesn't need database stuff,
I've instead fixed this build problem by exposing the
ScheduleIncremental constructor to database-less builds.
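The pattern, as a hedged sketch (the module name, field types, and the
other constructors are invented, and the build flag may not be called
WITH_DATABASE): constructors that carry no database types are declared and
exported unconditionally, and only the database-backed ones stay behind the
CPP flag.

    {-# LANGUAGE CPP #-}
    module IncrementalSketch (Incremental(..)) where

    data Incremental
        = NonIncremental
        | ScheduleIncremental String Incremental
    #ifdef WITH_DATABASE
        | StartIncremental FsckDb
        | ContIncremental FsckDb

    -- Placeholder for the database handle type only present in full builds.
    data FsckDb = FsckDb FilePath
    #endif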
* Perform a clean shutdown when --time-limit is reached.
This includes running queued git commands, and cleanup actions normally
run when a command is finished.
* fsck: Commit incremental fsck database when --time-limit is reached.
Previously, some of the last files fscked did not make it into the
database when using --time-limit.
Note that this changes Annex.addCleanup hooks to run after --time-limit
expires. Fsck was using such a hook to clean up after a
--incremental-schedule, and that cleanup shouldn't run when --time-limit
expires. So, instead, that cleanup code was moved to be run by
cleanupIncremental. This resulted in some data type juggling.
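The control flow is roughly the following minimal sketch (timeLimitedRun
and its arguments are hypothetical, not git-annex's actual code): once the
limit passes, remaining work is skipped but the queued cleanup still runs,
which is what lets git commands be flushed and the fsck database be
committed.

    import Data.Time.Clock (UTCTime, getCurrentTime)

    -- Process items until the time limit is reached, then run the cleanup
    -- actions instead of exiting abruptly.
    timeLimitedRun :: UTCTime -> [IO ()] -> IO () -> IO ()
    timeLimitedRun cutoff items cleanup = go items
      where
        go [] = cleanup
        go (a:as) = do
            now <- getCurrentTime
            if now >= cutoff
                then cleanup   -- --time-limit reached: clean shutdown
                else a >> go as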
This removes support for incremental fsck.
Got a little tricky..
Global options, seeking, and key options are still to be done.
Still no options though.
This is a work in progress. It compiles and is able to do basic command
dispatch, including git autocorrection, while using optparse-applicative
for the core commandline parsing.
* Many commands are temporarily disabled before conversion.
* Options are not wired in yet.
* cmdnorepo actions don't work yet.
Also, removed the [Command] list, which was only used in one place.
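For reference, basic subcommand dispatch with optparse-applicative looks
roughly like this; the Cmd type and the command actions are placeholders,
and the real CmdLine additionally layers git-style autocorrection and
per-command option parsers on top of it.

    import Options.Applicative

    data Cmd = Fsck | Add FilePath

    cmdParser :: Parser Cmd
    cmdParser = subparser
        ( command "fsck" (info (pure Fsck) (progDesc "check repository integrity"))
       <> command "add"  (info (Add <$> argument str (metavar "FILE"))
                               (progDesc "add a file"))
        )

    main :: IO ()
    main = do
        cmd <- execParser (info (cmdParser <**> helper) fullDesc)
        case cmd of
            Fsck  -> putStrLn "running fsck"      -- stand-in for the real action
            Add f -> putStrLn ("adding " ++ f)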
in a bare repo. Otherwise, still reports files with lost contents, even if the content is dead.
annex.diskreserve.
Conflicts:
    Command/Fsck.hs
    Messages.hs
    Remote/Directory.hs
    Remote/Git.hs
    Remote/Helper/Special.hs
    Types/Remote.hs
    debian/changelog
    git-annex.cabal
This no longer uses old-locale's defaultTimeLocale, but provides one
of its own.
Factored out a Logs.TimeStamp.
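A minimal sketch of such a parser, assuming timestamps of the form
"1431842159.478s" (POSIX seconds with an optional fraction); the function
name and format string are illustrative, and the point is that
defaultTimeLocale now comes from the time package itself rather than from
old-locale.

    import Data.Time.Clock.POSIX (POSIXTime, utcTimeToPOSIXSeconds)
    import Data.Time.Format (defaultTimeLocale, parseTimeM)

    parsePOSIXTime :: String -> Maybe POSIXTime
    parsePOSIXTime s = utcTimeToPOSIXSeconds
        <$> parseTimeM True defaultTimeLocale "%s%Qs" s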
when running fsck in a read-only repository. Closes: #698559 (fsck can still need to write to the repository if it finds problems, but a successful fsck can be done read-only)
annex.diskreserve limit.
repo does not have a copy of the content, preserve the bad content in .git/annex/bad/ to avoid further data loss.
This needed plumbing an AssociatedFile through retrieveKeyFileCheap.
This is much more space efficient!
Not a behavior change unless you were passing it to a command that ignored it.
in progress at the same time in the same repo without it getting confused about which files have been checked for which remotes.
Turns out sqlite does not like having its database deleted out from
underneath it. It might suffice to empty the table, but I would rather
start each fsck over with a new database, so I added a lock file, and
running incremental fscks use a shared lock.
This leaves one concurrency bug: running two concurrent fsck --more
will lead to "SQLite3 returned ErrorBusy while attempting to perform step."
and one or both will fail. This is a concurrent writers problem.
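The locking scheme can be sketched with the filelock package (the function
names here are mine, and git-annex has its own lock-file code): each
incremental fsck holds a shared lock, and deleting and recreating the
database requires the exclusive lock, so the database cannot vanish
underneath a running fsck.

    import System.FileLock (SharedExclusive(..), withFileLock)

    -- Each incremental fsck run holds the lock shared for its whole duration.
    runIncrementalFsck :: FilePath -> IO () -> IO ()
    runIncrementalFsck lockfile fsck = withFileLock lockfile Shared (const fsck)

    -- Starting over takes the lock exclusively before removing the old
    -- database, blocking until no incremental fsck is still using it.
    recreateDatabase :: FilePath -> IO () -> IO ()
    recreateDatabase lockfile rebuild = withFileLock lockfile Exclusive (const rebuild)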
Database.Handle can now be given a CommitPolicy, making it easy to specify
transaction granularity.
Benchmarking the old git-annex incremental fsck, which flips sticky bits,
against the new sqlite-based one, running in a repo with 37000 annexed
files, both from cold cache:
old: 6m6.906s
new: 6m26.913s
This commit was sponsored by TasLUG.
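As a rough illustration (the names below are assumptions, not
Database.Handle's actual API), a commit policy makes transaction
granularity a parameter of the handle, letting callers trade durability for
the speed lost to per-change transactions:

    -- Decide whether the changes queued so far should be committed now.
    data CommitPolicy
        = CommitImmediately   -- one transaction per change: robust, but slow
        | CommitEvery Int     -- batch up to this many changes per transaction

    shouldCommit :: CommitPolicy -> Int -> Bool
    shouldCommit CommitImmediately _      = True
    shouldCommit (CommitEvery n)   queued = queued >= n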
This makes interrupt and resume work robustly.
But incremental fsck is slowed down by all those transactions.
Did not keep backwards compat for sticky bit records. An incremental fsck
that is already in progress will start over on upgrade to this version.
This is not yet ready for merging. The autobuilders need to have sqlite
installed.
Also, interrupting a fsck --incremental does not commit the database.
So, resuming with fsck --more restarts from the beginning.
Memory: Constant during a fsck of tens of thousands of files.
(But it does seem to buffer the whole transaction in memory, so memory
use may really scale with the number of files.)
CPU: ?