 doc/bugs/present_files__47__directories_are_dropped_after_a_sync.mdwn | 38 ++++++++++++++++++++++++++++++++++++++
 doc/devblog/day_205__incremental.mdwn                                 | 21 +++++++++++++++++++++
 2 files changed, 59 insertions(+), 0 deletions(-)
diff --git a/doc/bugs/present_files__47__directories_are_dropped_after_a_sync.mdwn b/doc/bugs/present_files__47__directories_are_dropped_after_a_sync.mdwn
new file mode 100644
index 000000000..432ab9050
--- /dev/null
+++ b/doc/bugs/present_files__47__directories_are_dropped_after_a_sync.mdwn
@@ -0,0 +1,38 @@
+### Please describe the problem.
+
+This is a follow-up to the discussion on https://git-annex.branchable.com/forum/Standard_groups__47__preferred_contents/ where I unfortunately did not get a complete answer.
+I don't know if it is really a bug, but it does not behave as I would expect, and the documentation does not clearly discuss this behavior.
+
+Now to the problem:
+My annex is in "manual" mode (or, equivalently, its preferred content is
+'exclude="*" and present', or any other expression containing "present").
+Then I get a file using "git annex get file".
+I would expect that this file is now kept in sync, because it is "present".
+But it is not. When I change the file locally, the change is synced to the
+remotes, as it should be.
+However, when a remote changes that file, the new content is NOT synced;
+instead, the file is silently dropped.
+
+Similarly, when I get a complete directory tree in manual mode, I would expect it to be kept in sync: when a remote adds or changes a file in that directory, the change should also be synced to the local machine. But it is not. If a file is changed, it is silently dropped (as described above). If a file is added, only the metadata arrives; the content is never synced.
+
+### What steps will reproduce the problem?
+
+ - Create a file 'file' on the server; git annex add and git annex sync it
+ - On the client: git annex wanted here 'exclude="*" and present'
+ - On the client: git annex get file. The file is now present on the client
+ - Change the file on the server, then git annex sync
+ - On the client: git annex sync --content
+ - Result: the file is dropped again on the client (full session sketched below)
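+
+For reference, here are the above steps as one session; the file contents
+and remote setup are made up, but the commands are exactly those listed:
+
+    # on the server
+    echo v1 > file
+    git annex add file
+    git annex sync
+
+    # on the client
+    git annex wanted here 'exclude="*" and present'
+    git annex get file            # content is now present locally
+
+    # on the server: change the file
+    git annex unlock file
+    echo v2 > file
+    git annex add file
+    git annex sync
+
+    # on the client
+    git annex sync --content      # observed: the file is dropped instead
+                                  # of being updated to the new content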
+
+Similarly for directories:
+
+ - Create a (sub-)directory 'subdir' with files on the server and sync everything
+ - On the client: git annex get subdir. The subdirectory is now present, with all files under it downloaded
+ - On the server, create a new file in 'subdir', then git annex add and git annex sync --content
+ - On the client: git annex sync --content
+ - Result: the content of the new file is never synced to the client (session sketched below)
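+
+The directory case as a session sketch (again with made-up file contents):
+
+    # on the server
+    mkdir subdir
+    echo one > subdir/a
+    git annex add subdir
+    git annex sync --content
+
+    # on the client
+    git annex get subdir          # all files under subdir are downloaded
+
+    # on the server: add a new file to the directory
+    echo two > subdir/b
+    git annex add subdir/b
+    git annex sync --content
+
+    # on the client
+    git annex sync --content      # observed: subdir/b appears in the tree,
+                                  # but its content is never downloaded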
+
+### What version of git-annex are you using? On what operating system?
+
+    git-annex version: 5.20140717-g5a7d4ff
+    build flags: Assistant Webapp Webapp-secure Pairing Testsuite S3 WebDAV DNS Feeds Quvi TDFA CryptoHash
+    key/value backends: SHA256E SHA1E SHA512E SHA224E SHA384E SKEIN256E SKEIN512E SHA256 SHA1 SHA512 SHA224 SHA384 SKEIN256 SKEIN512 WORM URL
+    remote types: git gcrypt S3 bup directory rsync web webdav tahoe glacier ddar hook external
+
diff --git a/doc/devblog/day_205__incremental.mdwn b/doc/devblog/day_205__incremental.mdwn
new file mode 100644
index 000000000..c8535d439
--- /dev/null
+++ b/doc/devblog/day_205__incremental.mdwn
@@ -0,0 +1,21 @@
+Last night, I went over the new chunking interface, tightened up exception
+handling, and improved the API so that things like WebDAV will be able to
+reuse a single connection while all of a key's chunks are being downloaded.
+I am pretty happy with the interface now, and expect to convert more
+special remotes to use it soon.
+
+Just finished adding a killer feature: automatic resuming of interrupted
+downloads from chunked remotes. It's a sort of poor man's rsync that, while
+less efficient and less awesome, is going to work on *every* remote that
+gets the new chunking interface, from S3 to WebDAV to all of Tobias's
+external special remotes! It even allows things like starting a download
+from one remote, interrupting it, and resuming from another, and so on.
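+
+A quick sketch of what that looks like in practice (the remote names here
+are hypothetical):
+
+    git annex get bigfile --from mys3      # interrupt partway with ctrl-c
+    git annex get bigfile --from mywebdav  # picks up at the next chunk
+                                           # boundary instead of restarting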
+
+I had forgotten about resuming while designing the chunking API. Luckily, I
+got the design right anyway. The implementation was almost trivial, and took
+only about 2 hours! (See [[!commit 9d4a766cd7b8e8b0fc7cd27b08249e4161b5380a]])
+
+I'll later add resuming of interrupted uploads. It's not hard to detect
+such uploads with only one extra query of the remote, but in principle,
+it should be possible to do it with no extra overhead, since git-annex
+already checks if all the chunks are there before starting an upload.
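+
+Once that lands, resuming an interrupted upload should just be a matter of
+re-running the transfer; a sketch of the intended behavior, with a
+hypothetical remote:
+
+    git annex copy bigfile --to mys3    # interrupted partway through
+    git annex copy bigfile --to mys3    # should send only the missing chunks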