From 3a7b7c8ac97f263e5fdf6281ef6812aba4af0042 Mon Sep 17 00:00:00 2001
From: Joey Hess
Date: Wed, 12 Mar 2014 15:18:43 -0400
Subject: fully fix fsck memory use by iterative fscking

Not very well tested, but I'm sure it doesn't, eg, loop forever.
---
 doc/bugs/enormous_fsck_output_OOM.mdwn | 10 ++++++++++
 1 file changed, 10 insertions(+)

(limited to 'doc/bugs')

diff --git a/doc/bugs/enormous_fsck_output_OOM.mdwn b/doc/bugs/enormous_fsck_output_OOM.mdwn
index 975674b5c..b06655354 100644
--- a/doc/bugs/enormous_fsck_output_OOM.mdwn
+++ b/doc/bugs/enormous_fsck_output_OOM.mdwn
@@ -18,3 +18,13 @@ So I tried to follow your advice here and increase the stack:
     git-annex: Most RTS options are disabled. Link with -rtsopts to enable them.
 
 I wasn't sure what to do next, so any help would be appreciated.
+
+> Now only 20k problem shas max (more likely 10k) are collected from fsck,
+> so it won't use much memory (60 mb or so). If it had to truncate
+> shas from fsck, it will re-run fsck after the repair process,
+> which should either find no problems left (common eg when all missing shas
+> could be fetched from remotes), or find a new set of problem
+> shas, which it can feed back through the repair process.
+>
+> If the repository is very large, this means more work, but it shouldn't
+> run out of memory now. [[fixed|done]] --[[Joey]]
-- 
cgit v1.2.3
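
The iterative strategy the fix describes — collect at most a bounded batch of problem shas, repair that batch, then re-run fsck until a full pass comes back clean — can be sketched roughly as follows. This is a hypothetical illustration, not the actual git-annex code (which is Haskell); the function names, the pass limit, and treating `run_fsck`/`repair` as pluggable callbacks are all assumptions.

```python
def iterative_repair(run_fsck, repair, max_problems=20_000, max_passes=100):
    """Hypothetical sketch: repeatedly fsck and repair, holding at most
    max_problems shas in memory at a time, until a full fsck pass of the
    repository reports no problems.

    run_fsck() yields problem shas; repair(shas) attempts to fix them.
    max_passes is an illustrative safety bound, not from the source.
    """
    for _ in range(max_passes):
        problems = []
        for sha in run_fsck():
            if len(problems) >= max_problems:
                break  # cap hit; remaining shas wait for a later pass
            problems.append(sha)
        if not problems:
            return True  # fsck found nothing: repository is clean
        repair(problems)
        # Loop regardless of truncation: the next fsck pass either
        # confirms a clean repository or yields the next batch of shas
        # to feed back through the repair process.
    return False  # gave up after max_passes iterations
```

On a very large repository with more than `max_problems` issues, this costs extra fsck passes, but peak memory stays bounded by the batch size rather than by the total number of problems.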