* [[metadata]] for views
* [direct mode mappings scale badly with thousands of identical files](/bugs/__34__Adding_4923_files__34___is_really_slow)
* [[bugs/incremental_fsck_should_not_use_sticky_bit]]
* [[todo/wishlist:_pack_metadata_in_direct_mode]]
* [[todo/cache_key_info]]

What do all these have in common? They could all be improved by
using some kind of local database to store the information
efficiently.

The database should only function as a cache. It should be possible to
generate and update it by looking at the git repository.

* Metadata can be updated by looking at the git-annex branch,
  either its current state, or the diff between the old and new versions
* Direct mode mappings can be updated by looking at the current branch,
  to see which files map to which key. Or the diff between the old
  and new versions of the branch.
* Incremental fsck information is not stored in git, but can be
  "regenerated" by running fsck again.  
  (Perhaps doesn't quite fit, but let it slide..)

Store in the database the Ref of the branch that was used to construct it.
(Update in same transaction as cached data.)
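For example, the branch Ref could live in a single-row table in the same
schema. (The table and field names here are illustrative, and `Ref` would
need a `PersistField` instance.)

<pre>
CachedBranch
  branch Ref
</pre>

Updating this row in the same transaction as the cached data guarantees the
cache and its recorded Ref never get out of sync.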

## implementation plan

1. Implement for metadata, on a branch, with sqlite.
2. Make sure that builds on all platforms.
3. Add associated file mappings support. This is needed to fully
   use the caching database to construct views.
4. Store incremental fsck info in db.
5. Replace direct mode's .map files with the associated file mappings
   from step 3.

## sqlite or not?

sqlite seems like the most likely thing to work. But it does involve ugh,
SQL. And even if that's hidden by a layer like persistent, it's still going
to involve some technical debt (eg, database migrations).

It would be great if there were some haskell thing like acid-state
that I could use instead. But, acid-state needs to load the whole
DB into memory. In the comments of
[[bugs/incremental_fsck_should_not_use_sticky_bit]] I examined several
other haskell database-like things, and found them all wanting, except for
possibly TCache.

TODO: This seems promising; investigate it:
<https://awelonblue.wordpress.com/2014/12/19/vcache-an-acid-state-killer/>  
It uses LMDB, which is a C library, and its PVar is a variable named by a
bytestring, so it's essentially a key/value store where the values can be
arbitrary Haskell data types. Since git-annex already has Keys, and most
of the need for the database is to look up some cached value for a Key,
this seems like a pretty good fit!

## case study: persistent with sqlite

Here's a non-normalized database schema in persistent's syntax.

<pre>
CachedKey
  key Key
  associatedFiles [FilePath]
  lastFscked Int Maybe
  KeyIndex key

CachedMetaData
  key Key
  metaDataField MetaDataField
  metaDataValue MetaDataValue
</pre>

Using the above database schema and persistent with sqlite, I made
a database containing 30k CachedKey records. This took 5 seconds to create
and was 7 MB on disk. (Would be rather smaller if a more packed Key
show/read instance were used.)

Running 1000 separate queries to get 1000 CachedKeys took 0.688s with warm
cache. This was more than halved when all 1000 queries were done inside the
same `runSqlite` call. (Which could be done using a separate thread and some
MVars.)
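The single-`runSqlite` approach could be sketched as a worker thread that
owns the open database and services requests over a `Chan`, replying through
per-request `MVar`s. This shows only the concurrency skeleton; the
association list below stands in for the real persistent query:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forever)

-- A request pairs a key with an MVar on which the reply is sent.
data Req = Req String (MVar (Maybe String))

-- The worker owns the open database (simulated here by an association
-- list) and answers requests one at a time, so the connection is
-- opened once rather than per query.
worker :: [(String, String)] -> Chan Req -> IO ()
worker db reqs = forever $ do
    Req k reply <- readChan reqs
    putMVar reply (lookup k db)

-- Callers submit a request and block on the reply MVar.
query :: Chan Req -> String -> IO (Maybe String)
query reqs k = do
    reply <- newEmptyMVar
    writeChan reqs (Req k reply)
    takeMVar reply

main :: IO ()
main = do
    reqs <- newChan
    _ <- forkIO (worker [("key1", "foo"), ("key2", "bar")] reqs)
    mapM (query reqs) ["key1", "key2", "nosuch"] >>= print
```

In the real thing, `worker` would run inside one `runSqlite` call and
perform each lookup against the open connection.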

(Note that if the database is a cache, there is no need to perform migrations
when querying it. My benchmarks skip `runMigration`. Instead, if the query
fails, the database either doesn't exist or uses an incompatible schema, and
the cache can be rebuilt then. This avoids the problem that persistent's
migrations can sometimes fail.)
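That recovery strategy amounts to a small wrapper: attempt the query, and on
any exception rebuild the cache and retry. Here `rebuild` and `query` are
placeholders for regenerating the database from the git repository and for
the real persistent query; the `main` below just simulates them:

```haskell
import Control.Exception (SomeException, try)
import Data.IORef (newIORef, readIORef, writeIORef)

-- Run the query; if it fails for any reason (no database on disk,
-- incompatible schema), rebuild the cache and run the query again.
cachedQuery :: IO () -> IO a -> IO a
cachedQuery rebuild query = do
    r <- tryAny query
    case r of
        Right v -> pure v
        Left _  -> rebuild >> query
  where
    tryAny :: IO b -> IO (Either SomeException b)
    tryAny = try

main :: IO ()
main = do
    -- Simulate a cache that is missing until rebuilt.
    cache <- newIORef Nothing
    let rebuild = writeIORef cache (Just "cached value")
        query   = readIORef cache >>= maybe (error "no database") pure
    cachedQuery rebuild query >>= putStrLn
```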

Doubling the db to 60k scaled linearly in disk and cpu and did not affect
query time.

----

Here's a normalized schema:

<pre>
CachedKey
  key Key
  KeyIndex key
  deriving Show

AssociatedFiles
  keyId CachedKeyId Eq
  associatedFile FilePath
  KeyIdIndex keyId associatedFile
  deriving Show

CachedMetaField
  field MetaField
  FieldIndex field

CachedMetaData
  keyId CachedKeyId Eq
  fieldId CachedMetaFieldId Eq
  metaValue String

LastFscked
  keyId CachedKeyId Eq
  localFscked Int Maybe
</pre>

With this, running 1000 joins to get the associated files of 1000
Keys took 5.6s with warm cache. (When done in the same `runSqlite` call.) Ouch!

Update: This performance was fixed by adding `KeyIdOutdex keyId associatedFile`,
which adds a uniqueness constraint on the tuple of key and associatedFile.
With this, 1000 queries take 0.406s. Note that persistent is probably not
actually doing a join at the SQL level, so this could be sped up using
eg, esqueleto.

Update2: Using esqueleto to do a join got this down to 0.250s.
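Against the normalized schema above, the join could look something like this
in esqueleto (a sketch only; `theKey` is assumed to be the Key being looked
up, and the `CachedKeyKey` etc accessors are the entity fields persistent
generates from the schema):

<pre>
getAssociatedFiles theKey =
    select $ from $ \(k `InnerJoin` af) -> do
        on (k ^. CachedKeyId ==. af ^. AssociatedFilesKeyId)
        where_ (k ^. CachedKeyKey ==. val theKey)
        return (af ^. AssociatedFilesAssociatedFile)
</pre>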

Code: <http://lpaste.net/101141> <http://lpaste.net/101142>

Compare the above with 1000 calls to `associatedFiles`, which is approximately
as fast as just opening and reading 1000 files, so will take well under
0.05s with a **cold** cache.

So, we're looking at nearly an order of magnitude slowdown using sqlite and
persistent for associated files. OTOH, the normalized schema should
perform better when adding an associated file to a key that already has many.

For metadata, the story is much nicer. Querying for 30000 keys that all
have a particular tag in their metadata takes 0.65s. So fast enough to be
used in views.