From f161b5eb596839d54c006a68e875088a9d66c105 Mon Sep 17 00:00:00 2001
From: Joey Hess
Date: Mon, 16 Jan 2012 16:28:07 -0400
Subject: Fix data loss bug in directory special remote

When moving a file to the remote failed, and partially transferred content
was left behind in the directory, re-running the same move would think it
succeeded and delete the local copy.

I reproduced data loss when moving files to a partition that was almost
full. Interrupting a transfer could have similar results.

Easily fixed by using a temp file which is then moved atomically into
place once the transfer completes.

I've audited other calls to copyFileExternal, and other special remote
file transfer code; everything else seems to use temp files correctly
(rsync, git), or otherwise use atomic transfers (bup, S3).
---
 Remote/Directory.hs | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Remote/Directory.hs b/Remote/Directory.hs
index 8ca2a2875..23265dabc 100644
--- a/Remote/Directory.hs
+++ b/Remote/Directory.hs
@@ -98,11 +98,13 @@ storeEncrypted d (cipher, enck) k = do
 storeHelper :: FilePath -> Key -> (FilePath -> IO Bool) -> IO Bool
 storeHelper d key a = do
 	let dest = Prelude.head $ locations d key
+	let tmpdest = dest ++ ".tmp"
 	let dir = parentDir dest
 	createDirectoryIfMissing True dir
 	allowWrite dir
-	ok <- a dest
+	ok <- a tmpdest
 	when ok $ do
+		renameFile tmpdest dest
 		preventWrite dest
 		preventWrite dir
 	return ok
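
The pattern the fix relies on is write-to-temp-then-rename: the transfer
writes into a temp file beside the destination, and only a completed
transfer is renamed into place, so a failed or interrupted transfer can
never be mistaken for stored content. A minimal standalone sketch of that
pattern follows; the helper name atomicStore is hypothetical, not git-annex
code, and it omits the allowWrite/preventWrite permission handling that the
real storeHelper does.

import Control.Monad (when)
import System.Directory (createDirectoryIfMissing, renameFile)
import System.FilePath (takeDirectory)

-- Hypothetical helper, for illustration only: run a transfer action
-- against a temp file, and rename it into the final location only when
-- the transfer reports success. Since the temp file lives in the same
-- directory as the destination, the rename is atomic on POSIX filesystems.
atomicStore :: FilePath -> (FilePath -> IO Bool) -> IO Bool
atomicStore dest transfer = do
	let tmpdest = dest ++ ".tmp"
	createDirectoryIfMissing True (takeDirectory dest)
	ok <- transfer tmpdest
	when ok $ renameFile tmpdest dest
	return ok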