path: root/Database
Commit message (Author, Date)
* clarify absPathFrom (Joey Hess, 2016-01-05)
  The repo path is typically relative, not absolute, so providing it to
  absPathFrom doesn't yield an absolute path. This is not a bug, just
  unclear documentation.

  Indeed, there seems to be no reason to simplifyPath here, which
  absPathFrom does, so instead just combine the repo path and the
  TopFilePath.

  Also, removed an export of the TopFilePath constructor; asTopFilePath
  is provided to construct one as-is.
* use TopFilePath for associated files (Joey Hess, 2016-01-05)
  Fixes several bugs with updates of pointer files. When, eg, running
  git annex drop --from localremote, it was updating the pointer file in
  the local repository, not the remote. Also, fixes drop ../foo when run
  in a subdir, and probably lots of other problems. Test suite drops
  from ~30 to 11 failures now.

  TopFilePath is used to force thinking about what the filepath is
  relative to. The data stored in the sqlite db is still just a plain
  string, and TopFilePath is a newtype, so there's no overhead involved
  in using it in Database.Keys.
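  A minimal sketch of the idea, with illustrative names (not the exact
  git-annex definitions): the newtype costs nothing at runtime, but
  forces callers to say what the path is relative to.

    import System.FilePath ((</>))

    newtype TopFilePath = TopFilePath { getTopFilePath :: FilePath }
        deriving (Show, Eq)

    -- Construct one as-is, with no path normalization.
    asTopFilePath :: FilePath -> TopFilePath
    asTopFilePath = TopFilePath

    -- Using the path means explicitly combining it with the repo path,
    -- so code cannot accidentally treat it as relative to the cwd.
    fromTopFilePath :: FilePath -> TopFilePath -> FilePath
    fromTopFilePath repodir (TopFilePath f) = repodir </> f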
* improve data type (Joey Hess, 2016-01-01)
* wait for git ls-tree to exit (Joey Hess, 2016-01-01)
* only do scan when there's a branch, not in freshly created new repo (Joey Hess, 2016-01-01)
* scan for unlocked files on init/upgrade of v6 repo (Joey Hess, 2016-01-01)
* fix build with pre-AMP ghc (Joey Hess, 2015-12-28)
* fix build with pre-AMP GHC (Joey Hess, 2015-12-28)
* unused import (Joey Hess, 2015-12-24)
* typo (Joey Hess, 2015-12-24)
* optimise read and write for Keys database (untested) (Joey Hess, 2015-12-23)
  Writes are optimised by queueing up multiple writes when possible.
  The queue is flushed after the Annex monad action finishes. That makes
  it happen on program termination, and also whenever a nested Annex
  monad action finishes.

  Reads are optimised by checking once (per AnnexState) if the database
  exists. If the database doesn't exist yet, all reads return mempty.

  Reads also cause queued writes to be flushed, so reads will always be
  consistent with writes (as long as they're made inside the same Annex
  monad). A future optimisation path would be to determine when that's
  not necessary, which is probably most of the time, and avoid flushing
  unnecessarily.

  Design notes for this commit:

  - separate reads from writes
  - reuse a handle which is left open until program exit or until the
    MVar goes out of scope (and autoclosed then)
  - writes are queued
  - queue is flushed periodically
  - immediate queue flush before any read
  - auto-flush queue when database handle is garbage collected
  - flush queue on exit from Annex monad (Note that this may happen
    repeatedly for a single database connection; or a connection may be
    reused for multiple Annex monad actions, possibly even concurrent
    ones.)
  - if database does not exist (or is empty) the handle is not opened by
    reads; reads instead return empty results
  - writes open the handle if it was not open previously
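  A minimal, self-contained sketch of the flush-before-read pattern,
  with an in-memory Map standing in for sqlite and illustrative names
  throughout:

    import Control.Concurrent.MVar
    import qualified Data.Map as M

    type Key = String
    type Val = String

    data DbQueue = DbQueue
        { queuedWrites :: MVar [(Key, Val)] -- writes not yet committed
        , dbStore :: MVar (M.Map Key Val)   -- stands in for the sqlite db
        }

    newDbQueue :: IO DbQueue
    newDbQueue = DbQueue <$> newMVar [] <*> newMVar M.empty

    queueWrite :: DbQueue -> Key -> Val -> IO ()
    queueWrite q k v = modifyMVar_ (queuedWrites q) (return . ((k, v) :))

    -- Flush sends all queued writes to the store in one batch, as a
    -- single transaction would in sqlite.
    flushQueue :: DbQueue -> IO ()
    flushQueue q = modifyMVar_ (queuedWrites q) $ \ws -> do
        modifyMVar_ (dbStore q) (return . M.union (M.fromList (reverse ws)))
        return []

    -- Reads flush first, so they are always consistent with writes
    -- made through the same queue.
    readDb :: DbQueue -> Key -> IO (Maybe Val)
    readDb q k = do
        flushQueue q
        M.lookup k <$> readMVar (dbStore q)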
* allow flushDbQueue to be run repeatedly (Joey Hess, 2015-12-23)
* auto-close database connections when MVar is GCed (Joey Hess, 2015-12-23)
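  The auto-close presumably hangs a finalizer off the MVar holding the
  connection; a minimal sketch, with illustrative names:

    import Control.Concurrent.MVar (MVar, newMVar, mkWeakMVar)

    -- When the MVar is garbage collected, the finalizer runs and the
    -- connection is closed, even if shutdown was never called.
    openAutoClosing :: IO conn -> (conn -> IO ()) -> IO (MVar conn)
    openAutoClosing open close = do
        c <- open
        mv <- newMVar c
        _ <- mkWeakMVar mv (close c)
        return mv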
* split out Database.Queue from Database.Handle (Joey Hess, 2015-12-23)
  Fsck can use the queue for efficiency since it is write-heavy, and
  only reads a value before writing it. But, the queue is not suited to
  the Keys database.
* temporarily remove cached keys database connection (Joey Hess, 2015-12-16)
  The problem is that shutdown is not always called, particularly in the
  test suite. So, a database connection would be opened, possibly some
  changes queued, and then not shut down.

  One way this can happen is when using Annex.eval or Annex.run with a
  new state. A better fix might be to make both of them call
  Keys.shutdown (and be sure to do it even if the annex action threw an
  error).

  Complication: Sometimes they're run reusing an existing state, so
  shutting down a database connection could cause problems for other
  users of that same state. I think this would need an MVar holding the
  database handle, so it could be emptied once shut down, and another
  user of the database connection could then start up a new one if it
  got shut down. But, what if 2 threads were concurrently using the same
  database handle and one shut it down while the other was writing to
  it? Urgh.

  Might have to go that route eventually to get the database access to
  run fast enough. For now, a quick fix to get the test suite happier,
  at the expense of speed.
* reorder database shutdown to be concurrency safe (Joey Hess, 2015-12-16)
  If a DbHandle is in use by another thread, it could be queueing
  changes while shutdown is running. So, wait for the worker to finish
  before flushing the queue, so that any last-minute writes are
  included. Before this fix, they would be silently dropped.

  Of course, if the other thread continues to try to use a DbHandle once
  it's closed, it will block forever as the worker is no longer reading
  from the jobs MVar. So, that would crash with "thread blocked
  indefinitely in an MVar operation".
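  A self-contained sketch of that ordering, with illustrative names; the
  key point is that the worker is joined before the final flush:

    import Control.Concurrent

    data Job = DoJob (IO ()) | StopJob

    -- The worker reads jobs from an MVar until told to stop.
    startWorker :: IO (MVar Job, MVar ())
    startWorker = do
        jobs <- newEmptyMVar
        done <- newEmptyMVar
        let loop = do
                j <- takeMVar jobs
                case j of
                    DoJob a -> a >> loop
                    StopJob -> putMVar done ()
        _ <- forkIO loop
        return (jobs, done)

    -- Shutdown waits for the worker to exit before flushing, so
    -- last-minute writes queued by other threads are not dropped.
    shutdown :: MVar Job -> MVar () -> IO () -> IO ()
    shutdown jobs done flushQueue = do
        putMVar jobs StopJob
        takeMVar done
        flushQueue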
* comment (Joey Hess, 2015-12-16)
* add getAssociatedKey (Joey Hess, 2015-12-15)
  I guess this is just as efficient as the getAssociatedFiles query, but
  I have not tried to optimise the database yet.
* use InodeCache when dropping a key to see if a pointer file can be safely reset (Joey Hess, 2015-12-09)
  The Keys database can hold multiple inode caches for a given key. One
  for the annex object, and one for each pointer file, which may not be
  hard linked to it.

  Inode caches for a key are recorded when its content is added to the
  annex, but only if it has known pointer files. This is to avoid the
  overhead of maintaining the database when not needed.

  When the smudge filter outputs a file's content, the inode cache is
  not updated, because git's smudge interface doesn't let us write the
  file. So, dropping will fall back to doing an expensive verification
  then. Ideally, git's interface would be improved, and then the inode
  cache could be updated then too.
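  Roughly, an inode cache lets a cheap stat stand in for an expensive
  checksum. A sketch with assumed field names (POSIX only):

    import System.Posix.Files
    import System.Posix.Types (FileID, FileOffset, EpochTime)

    -- Enough stat(2) data to detect that a file was modified.
    data InodeCache = InodeCache
        { cacheInode :: FileID
        , cacheSize :: FileOffset
        , cacheMTime :: EpochTime
        } deriving (Eq, Show)

    genInodeCache :: FilePath -> IO InodeCache
    genInodeCache f = do
        s <- getFileStatus f
        return (InodeCache (fileID s) (fileSize s) (modificationTime s))

    -- If the current cache matches a recorded one, the pointer file is
    -- unmodified and can be reset without verifying its checksum.
    unmodified :: [InodeCache] -> FilePath -> IO Bool
    unmodified recorded f = (`elem` recorded) <$> genInodeCache f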
* add inode cache to the db (Joey Hess, 2015-12-09)
  Renamed the db to keys, since it holds various info about Keys.

  Dropping a key will update its pointer files, as long as their content
  can be verified to be unmodified. This falls back to checksum
  verification, but I want it to use an InodeCache of the key, for
  speed. But, I have not made anything populate that cache yet.
* stash DbHandle in Annex state (Joey Hess, 2015-12-09)
* associated files database (Joey Hess, 2015-12-07)
* avoid ugly error about MVar if the sqlite worker thread crashes (Joey Hess, 2015-10-12)
* fsck: Work around bug in persistent that broke display of problematically encoded filenames on stderr when using --incremental. (Joey Hess, 2015-09-09)
* fsck: Commit incremental fsck database after every 1000 files fscked, or every 5 minutes, whichever comes first. (Joey Hess, 2015-07-31)

  Previously, commits were made every 1000 files fscked.

  Also, improve docs.
* use lock pools throughout git-annex (Joey Hess, 2015-05-19)
  The one exception is in Utility.Daemon. As long as a process only
  daemonizes once, which seems reasonable, and as long as it avoids
  calling checkDaemon once it's already running as a daemon, the fcntl
  locking gotchas won't be a problem there.

  Annex.LockFile has its own separate lock pool layer, which has been
  renamed to LockCache. This is a persistent cache of locks that persist
  until closed.

  This is not quite done; lockContent still needs to be converted.
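  A lock cache can be sketched as a map of already-opened locks guarded
  by an MVar (illustrative; Annex.LockFile's LockCache differs in
  detail):

    import Control.Concurrent.MVar
    import qualified Data.Map as M

    type LockCache lock = MVar (M.Map FilePath lock)

    -- Reuse a cached lock for this file, or acquire and cache a new
    -- one; cached locks persist until explicitly closed.
    cachedLock :: LockCache lock -> FilePath -> IO lock -> IO lock
    cachedLock cache file acquire = modifyMVar cache $ \m ->
        case M.lookup file m of
            Just l -> return (m, l)
            Nothing -> do
                l <- acquire
                return (M.insert file l m, l)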
* rejigger imports for clean build with ghc 7.10's AMP changes (Joey Hess, 2015-05-10)
  The explicit import Prelude after import Control.Applicative is a
  trick to avoid a warning.
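  The trick, for reference: pre-AMP GHC needs the Control.Applicative
  import, and the explicit import Prelude after it keeps GHC 7.10 from
  warning that the import is redundant.

    import Control.Applicative
    import Prelude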
* removed all uses of undefined from code base (Joey Hess, 2015-04-19)
  It's a code smell, and can lead to hard-to-diagnose error messages.
* generated TH uses forall (Joey Hess, 2015-02-22)
* avoid closing db handle when reconnecting to do a write (Joey Hess, 2015-02-22)
* complete workaround for sqlite SELECT ErrorBusy on new connection bug (Joey Hess, 2015-02-22)
* WIP (Joey Hess, 2015-02-18)
* more extensions needed by newer version of persistent (Joey Hess, 2015-02-18)
* deal with rare SELECT ErrorBusy failures (Joey Hess, 2015-02-18)
  I think they might be a sqlite bug. In discussions with sqlite devs.
* use WAL mode to ensure read from db always works, even when it's being written to (Joey Hess, 2015-02-18)

  Also, moved the database to a subdir, as there are multiple files.

  This seems to work well with concurrent fscks, although they still do
  redundant work due to the commit granularity. Occasionally two writes
  will conflict, and one is then deferred and happens later.

  Except, with 3 concurrent fscks, I got failures:

  git-annex: user error (SQLite3 returned ErrorBusy while attempting to
  perform prepare "SELECT \"fscked\".\"key\"\nFROM \"fscked\"\nWHERE
  \"fscked\".\"key\" = ?\n": database is locked)

  Argh!!!
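  The pragma itself is a single statement. A sketch using the
  direct-sqlite API (git-annex goes through persistent, but the pragma
  is the same):

    import qualified Database.SQLite3 as Sqlite
    import qualified Data.Text as T

    -- Switch the database to write-ahead-log mode, so reads are not
    -- blocked while another process is writing.
    enableWAL :: FilePath -> IO ()
    enableWAL db = do
        conn <- Sqlite.open (T.pack db)
        stmt <- Sqlite.prepare conn (T.pack "PRAGMA journal_mode=WAL;")
        _ <- Sqlite.step stmt
        Sqlite.finalize stmt
        Sqlite.close conn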
* more robust handling of deferred commits (Joey Hess, 2015-02-18)
  Still not robust enough. I have 3 fscks running concurrently, and am
  seeing:

  ("commit deferred",user error (SQLite3 returned ErrorBusy while
  attempting to perform step.))

  and

  git-annex: user error (SQLite3 returned ErrorBusy while attempting to
  perform prepare "SELECT \"fscked\".\"key\"\nFROM \"fscked\"\nWHERE
  \"fscked\".\"key\" = ?\n": database is locked)
* fsck: Multiple incremental fscks of different repos (some remote) can now be in progress at the same time in the same repo without it getting confused about which files have been checked for which remotes. (Joey Hess, 2015-02-17)
* allow for concurrent incremental fsck processes again (sorta) (Joey Hess, 2015-02-17)
  Sqlite doesn't support multiple concurrent writers at all. One of them
  will fail to write. It's not even possible to have two processes
  building up separate transactions at the same time. Before using
  sqlite, incremental fsck could work perfectly well with multiple fsck
  processes running concurrently. I'd like to keep that working.

  My partial solution, so far, is to make git-annex buffer writes, and
  every so often send them all to sqlite at once, in a transaction. So
  most of the time, nothing is writing to the database. (And if it gets
  unlucky and a write fails due to a collision with another writer, it
  can just wait and retry the write later.) This lets multiple processes
  write to the database successfully.

  But, for the purposes of concurrent, incremental fsck, it's not ideal.
  Each process doesn't immediately learn of files that another process
  has checked. So they'll tend to do redundant work.

  Only way I can see to improve this is to use some other mechanism for
  short-term IPC between the fsck processes. Not yet done.

  ----

  Also, make addDb check if an item is in the database already, and not
  try to re-add it. That fixes an intermittent crash with "SQLite3
  returned ErrorConstraint while attempting to perform step."

  I am not 100% sure why; it only started happening when I moved write
  buffering into the queue. It seemed to generally happen on the same
  file each time, so could just be due to multiple files having the same
  key. However, I doubt my sound repo has many duplicate keys, and I
  suspect something else is going on.

  ----

  Updated benchmark, with the 1000 item queue: 6m33.808s
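  The wait-and-retry idea can be sketched like this (illustrative; real
  code would inspect the error instead of catching everything):

    {-# LANGUAGE ScopedTypeVariables #-}

    import Control.Concurrent (threadDelay)
    import Control.Exception (try, SomeException)

    -- Try to commit a batch of buffered writes, backing off briefly
    -- when another process holds the write lock.
    commitWithRetry :: IO () -> IO ()
    commitWithRetry commit = go (10 :: Int)
      where
        go 0 = commit -- final attempt; let any exception propagate
        go n = do
            r <- try commit
            case r of
                Right () -> return ()
                Left (_ :: SomeException) -> do
                    threadDelay 100000 -- 0.1 second
                    go (n - 1)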
* avoid crash when starting fsck --incremental when one is already running (Joey Hess, 2015-02-17)
  Turns out sqlite does not like having its database deleted out from
  underneath it. It might suffice to empty the table, but I would rather
  start each fsck over with a new database, so I added a lock file, and
  running incremental fscks use a shared lock.

  This leaves one remaining concurrency bug; running two concurrent fsck
  --more will lead to:

  "SQLite3 returned ErrorBusy while attempting to perform step."

  and one or both will fail. This is a concurrent writers problem.
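  A sketch of the shared fcntl lock via the unix package (openFd
  signature as of unix < 2.8; names illustrative):

    import System.IO (SeekMode(AbsoluteSeek))
    import System.Posix.IO
    import System.Posix.Types (Fd)

    -- Each running incremental fsck holds a shared lock on the lock
    -- file; replacing the database requires taking it exclusively.
    lockShared :: FilePath -> IO Fd
    lockShared lockfile = do
        fd <- openFd lockfile ReadWrite (Just 0o644) defaultFileFlags
        waitToSetLock fd (ReadLock, AbsoluteSeek, 0, 0)
        return fd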
* show error when sqlite crashes worker thread (Joey Hess, 2015-02-17)
  Better than "blocked indefinitely in MVar"..
* avoid fromIntegral overhead (Joey Hess, 2015-02-16)
* commit new transaction after 60 seconds (Joey Hess, 2015-02-16)
  Database.Handle can now be given a CommitPolicy, making it easy to
  specify transaction granularity.

  Benchmarking the old git-annex incremental fsck, which flips sticky
  bits, against the new one that uses sqlite, running in a repo with
  37000 annexed files, both from cold cache:

  old: 6m6.906s
  new: 6m26.913s

  This commit was sponsored by TasLUG.
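  A plausible shape for such a policy type; names are assumed, not the
  exact Database.Handle definition:

    import Data.Time.Clock (UTCTime, diffUTCTime)

    data CommitPolicy
        = CommitImmediately      -- every change is its own transaction
        | CommitAfter Int Double -- after N changes or S seconds

    shouldCommit :: CommitPolicy -> Int -> UTCTime -> UTCTime -> Bool
    shouldCommit CommitImmediately _ _ _ = True
    shouldCommit (CommitAfter n secs) nchanges started now =
        nchanges >= n || realToFrac (diffUTCTime now started) >= secs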
* commit more transactions when fscking (Joey Hess, 2015-02-16)
  This makes interrupt and resume work, robustly. But, incremental fsck
  is slowed down by all those transactions..
* convert incremental fsck to using sqlite database (Joey Hess, 2015-02-16)
  Did not keep backwards compat for sticky bit records. An incremental
  fsck that is already in progress will start over on upgrade to this
  version.

  This is not yet ready for merging. The autobuilders need to have
  sqlite installed.

  Also, interrupting a fsck --incremental does not commit the database.
  So, resuming with fsck --more restarts from the beginning.

  Memory: Constant during a fsck of tens of thousands of files. (But, it
  does seem to buffer the whole transaction in memory, so may really
  scale with number of files.)

  CPU: ?