`git annex export`, corresponding to import. This might be useful for eg,
datalad. There are some requests to make eg an S3 bucket mirror the
filenames in the git-annex repository with incremental updates. That seems
out of scope (there are many tools that do that kind of thing; search
"deploy files to S3 bucket"), but something simpler like
`git annex export` could be worth doing.

`git annex export --to remote files` would copy the files to the remote,
using the names in the working tree. For remotes like S3, it could add the
url of the exported file, so that another clone of the repo could use the
exported data.

Would this be able to reuse the existing `storeKey` interface, or would
there need to be a new interface in supported remotes?
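
One relevant difference: `storeKey` addresses content by Key alone, while
an export needs to know the file's location in the exported tree. So a
separate interface seems plausible. A purely hypothetical sketch of what
that could look like (none of these names or types are the actual
git-annex API):

    -- Hypothetical sketch only; not the real git-annex API. The point
    -- is that export operations need the file's location in the
    -- exported tree, which storeKey (keyed on content alone) lacks.

    -- | Where a file lives in the exported tree.
    newtype ExportLocation = ExportLocation FilePath
      deriving Show

    -- | Operations a special remote would need to support export,
    -- parameterized over the remote's action monad.
    data ExportActions m = ExportActions
      { storeExport  :: FilePath -> ExportLocation -> m Bool
        -- ^ upload a local file under the given name on the remote
      , removeExport :: ExportLocation -> m Bool
        -- ^ delete an exported file from the remote
      , renameExport :: ExportLocation -> ExportLocation -> m Bool
        -- ^ rename on the remote, for remotes that support it
      }

A remote that can't rename could have `renameExport` always return False,
falling back to a fresh `storeExport` plus a `removeExport`.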

--[[Joey]]

Work is in progress. Todo list:

* Compact the export.log to remove old entries.
* `git annex get --from export` works in the repo that exported to it,
  but in another repo, the export db won't be populated, so it won't work.
  Maybe just show a useful error message in this case?  
  However, exporting from one repository and then trying to update the
  export from another repository also doesn't work right, because the
  export database is not populated. So, it seems that the export database
  needs to get populated based on the export log in these cases (see the
  sketch after this list).
* Support export to additional special remotes (webdav etc).
* Support export in the assistant (when eg setting up an S3 special remote).
  Would need git-annex sync to export to the master tree?
  This is similar to the little-used preferreddir= preferred content
  setting and the "public" repository group.

Low priority:

* When there are two pairs of duplicate files, and the filenames are
  swapped around, the current rename handling renames both dups to a single
  temp file, and so the other file in the pair gets re-uploaded
  unnecessarily. This could be improved.

  Perhaps: Find pairs of renames that swap content between two files.
  Run each pair in turn. Then run the current rename code. Although this
  still probably misses cases where, eg, content cycles among 3 files, and
  the same content among 3 other files. Is there a general algorithm?
  (A sketch of one approach follows this list.)
* Exporting to box.com via webdav, a rename of a file behaves
  oddly. The rename to the temp file succeeds, but the rename of the temp
  file to the final name fails.
  Also, sometimes the delete of the temp file that's done as a fallback
  fails to actually delete it.
  Hypothesis: Those requests are made over separate http connections, and
  may be talking to two different backend servers that are out of sync.
  So, making export cache and reuse http connections might help.
  Update: No, caching connections did not solve it.
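
For the rename ordering, a general approach is to treat the renames as a
permutation of export locations: emit any rename whose destination is not
itself a pending source, and when only cycles remain, break one cycle by
parking a file at a temp name. A minimal sketch; the temp naming scheme
here is made up for illustration.

    import qualified Data.Map.Strict as M

    -- (src, dst): move the content at src to dst on the remote.
    type Rename = (FilePath, FilePath)

    -- Order renames so no step clobbers content a later step needs.
    -- Each cycle costs exactly one temp file, and no content is
    -- re-uploaded.
    plan :: [Rename] -> [Rename]
    plan = go (0 :: Int) . M.fromList
      where
        go _ m | M.null m = []
        go n m =
          case [ r | r@(_, d) <- M.toList m, not (M.member d m) ] of
            -- Safe: the destination is not a pending source.
            ((s, d) : _) -> (s, d) : go n (M.delete s m)
            -- Only cycles remain: park one source at a temp name; the
            -- temp file moves to its real destination once the rest
            -- of its cycle has unwound.
            [] -> case M.toList m of
              ((s, d) : _) ->
                let tmp = ".export-tmp-" ++ show n
                in (s, tmp) : go (n + 1) (M.insert tmp d (M.delete s m))
              [] -> []

For the swapped pair, `plan [("a","b"),("b","a")]` yields a -> tmp,
b -> a, tmp -> b, so each swap costs one extra rename rather than a
re-upload, and a cycle among 3 files likewise needs only one temp file.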