getLine in waitForTermination doesn't work when stdin is closed.
Just loop forever; there was no reason to getLine here, I think.
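A minimal sketch of that loop, assuming the process is ultimately ended by a
signal handler installed elsewhere (not shown); only the getLine replacement
is illustrated:

```haskell
import Control.Concurrent (threadDelay)
import Control.Monad (forever)

-- Block indefinitely without reading stdin; termination comes from a
-- signal, never from this loop returning.
waitForTermination :: IO ()
waitForTermination = forever $
    threadDelay 1000000 -- sleep one second per iteration, forever
```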
5.20140707).
Reversion introduced in 89c188fac37c20c40b0a9dabeb35403cfa4e4f52.
The locking code was wrong; the webapp re-ran itself, saw the pid file was
locked, and so didn't start!
fix the tip of the iceberg)
Configure crashed on systems with that process and without, eg, sha256sum.
The rest of the code in configure seems to work ok, since it uses sh -c to
probe for commands, and sh is always in the path, so it works.
Dunno about all the rest of git-annex. Not a huge amount of external
program use, other than git, so perhaps this won't be a large pain.
Note that boolSystem can throw an exception now if the program doesn't
exist. It could easily be changed back to returning False.
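A sketch of the sh -c probing style mentioned above; the helper name is
illustrative, not configure's actual code:

```haskell
import System.Exit (ExitCode(..))
import System.Process (readProcessWithExitCode)

-- Probe for an external command by running it via sh. When the program
-- is missing, sh exits nonzero, so no exception escapes the way it does
-- when execing a nonexistent program directly.
probeCommand :: String -> IO Bool
probeCommand cmd = do
    (code, _, _) <- readProcessWithExitCode "sh"
        ["-c", cmd ++ " </dev/null >/dev/null 2>&1"] ""
    return (code == ExitSuccess)
```

Here probeCommand "sha256sum" simply returns False, rather than throwing,
when the program is absent.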
Minor because normally only 1 FD is leaked per git-annex run. However,
the test suite leaks a few hundred FDs, and this broke it on the Debian
autobuilders, which seem to have a tighter than usual ulimit.
The leak was introduced by the lazy getDirectoryContents' that was
introduced in b54de1dad4874b7561d2c5a345954b6b5c594078 in order to scale to
millions of journal files -- if the lazy list was never fully consumed, the
directory handle did not get closed.
Instead, pull in openDirectory/readDirectory/closeDirectory code that I
already developed and submitted in a patch to the haskell directory library
earlier. Using this in journalDirty avoids the place that the lazy list
caused a problem. And using it in stageJournal eliminates the need for
getDirectoryContents'.
The getJournalFiles* functions are switched back to using the regular
strict getDirectoryContents. I'm not sure if those always consume the whole
list, so this avoids any leak. And the things that call those are things
like git annex unused, which also look at every file committed to the
git-annex branch, so would need more work to scale to insane numbers of
files anyway.
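For comparison, a sketch of strict, bracketed directory reading in the style
of those openDirectory/readDirectory/closeDirectory functions, here built
directly on the unix package's dir stream primitives:

```haskell
import Control.Exception (bracket)
import System.Posix.Directory (openDirStream, readDirStream, closeDirStream)

-- Read a directory's entries strictly, guaranteeing the directory
-- handle is closed even if an exception interrupts the read.
-- ("." and ".." are included; filter them out as needed.)
readDirStrict :: FilePath -> IO [FilePath]
readDirStrict dir = bracket (openDirStream dir) closeDirStream go
  where
    go h = do
        entry <- readDirStream h -- yields "" at end of directory
        if null entry
            then return []
            else (entry :) <$> go h
```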
Yes, this means that git annex webapp on Windows execs git-annex, which
execs itself to set env, and then execs itself again to redirect logs.
This is disgusting. This is Windows(TM).
Rather than calculating the TSDelta once, and caching it, this now
reads the inode sentinel file's InodeCache file once, and then each time a
new InodeCache is generated, looks at the sentinel file to get the current
delta.
This way, if the time zone changes while git-annex is running, it will
adapt.
This adds some inefficiency, but only on Windows, and only 1 stat per new
file added. The worst inefficiency is that `git annex status` and
`git annex sync` will now (on Windows) stat the inode sentinel file once per
file in the repo.
It would be more efficient to use getCurrentTimeZone, rather than needing
to stat the sentinel file. This should be easy to do, once the time
package gets my bugfix patch.
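A sketch of that more efficient variant, assuming the zone offset recorded
at repository setup is available (the parameter here is hypothetical):

```haskell
import Data.Time.LocalTime (getCurrentTimeZone, timeZoneMinutes)

-- Derive the timestamp delta from the current time zone offset,
-- avoiding a stat of the sentinel file per InodeCache generated.
tsDeltaSeconds :: Int -> IO Int
tsDeltaSeconds recordedZoneMinutes = do
    tz <- getCurrentTimeZone
    return ((timeZoneMinutes tz - recordedZoneMinutes) * 60)
```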
This commit was sponsored by Jürgen Lüters.
On Windows, changing the time zone causes the apparent mtime of files to
change. This confuses git-annex, which naturally thinks this means the files
have actually been modified (since THAT'S WHAT AN MTIME IS FOR, BILL <sheesh>).
Work around this stupidity by using the inode sentinel file to detect if
the timezone has changed, and calculate a TSDelta, which will be applied
when generating InodeCaches.
This should add no overhead at all on unix. Indeed, I sped up a few
things slightly in the refactoring.
Seems to basically work! But it has a big known problem:
If the timezone changes while the assistant (or a long-running command)
runs, it won't notice, since it only checks the inode cache once, and
so will use the old delta for all new inode caches it generates for new
files it's added. Which will result in them seeming changed the next time
it runs.
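A minimal sketch of the sentinel comparison, with hypothetical names; the
recorded mtime stands for whatever was stashed away when the sentinel file
was created:

```haskell
import System.Posix.Types (EpochTime)
import System.PosixCompat.Files (getFileStatus, modificationTime)

-- The difference between the sentinel file's current apparent mtime and
-- its recorded mtime is the TSDelta to subtract when generating
-- InodeCaches, cancelling out the time zone induced shift.
calcTSDelta :: FilePath -> EpochTime -> IO EpochTime
calcTSDelta sentinelFile recordedMtime = do
    apparent <- modificationTime <$> getFileStatus sentinelFile
    return (apparent - recordedMtime)
```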
This commit was sponsored by Vincent Demeester.
Deal with FAT's low resolution timestamps, which in combination with
Linux's caching of higher res timestamps while a FAT is mounted, caused
direct mode repositories on FAT to seem to have modified files after they
were unmounted and remounted.
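One way to compare timestamps tolerantly, as a sketch that assumes FAT's
2-second timestamp resolution (not necessarily the exact fix made here):

```haskell
import System.Posix.Types (EpochTime)

-- Treat two mtimes as equal when they differ by no more than FAT's
-- 2-second timestamp granularity.
mtimeMatchesFAT :: EpochTime -> EpochTime -> Bool
mtimeMatchesFAT a b = abs (a - b) <= 2
```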
This commit was sponsored by Fabrice Rossi.
This version of wai changed the type of Middleware, so I cannot seem
to liftIO inside it. So, I got rid of a lot of not-really-needed
complexity that was used to hook into System.Log.Logger, and just use
the standard wai stdout logger when debug logging is enabled.
Format may change some, and it logs http to stdout instead of stderr
now. Doesn't matter for the webapp since both go to the same log anyway.
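A sketch of wiring in the standard wai stdout logger from wai-extra; the
debugEnabled flag is a stand-in for however the webapp decides debug logging
is on:

```haskell
import Network.Wai (Application)
import Network.Wai.Middleware.RequestLogger (logStdout)

-- Wrap the application with request logging to stdout only when debug
-- logging is enabled; otherwise pass it through untouched.
maybeLogRequests :: Bool -> Application -> Application
maybeLogRequests debugEnabled app
    | debugEnabled = logStdout app
    | otherwise = app
```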
update code to avoid cwd and env redefinition warnings
the path.
importing files to a disk that is full.
This should work even with the old warp version in debian stable.
Char8 often indicates an encoding bug. It didn't here, but I can avoid it
and not worry about it.
This avoids ssh prompting for passwords on stdin, ever.
It may also change other behavior of other programs, as there is no
controlling terminal now. However, setsid was already done when running the
assistant in daemon mode, so any behavior changes should not really be new.
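A sketch of the setsid idea using the unix package's binding (createSession
is the setsid wrapper; it fails in a process group leader, so callers
typically fork first):

```haskell
import System.Posix.Process (createSession, executeFile)

-- Detach from the controlling terminal before execing a command that
-- may run ssh; with no controlling terminal, ssh has nowhere to put a
-- password prompt.
execDetached :: FilePath -> [String] -> IO a
execDetached cmd args = do
    _ <- createSession
    executeFile cmd True args Nothing
```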
Omitted a couple of files that have had significant contributions from
others.
Conflicts:
debian/changelog
Use cabal include file
Including avoiding the need for cabal's defines for Utility.URI.
Conflicts:
doc/devblog/day_152__more_ssh_connection_caching.mdwn
pulls.
For sync, saves 1 ssh connection per remote. For remotedaemon, the same
ssh connection that is already open to run git-annex-shell notifychanges
is reused to pull from the remote.
The only potential problem is that this also enables connection caching
when the assistant syncs with an ssh remote, including the sync it does
when a network connection has just come up. In that case, cached ssh
connections are likely to be stale, and so using them would hang.
Until I'm sure such problems have been dealt with, this commit needs to
stay on the remotecontrol branch, and not be merged to master.
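For reference, the OpenSSH options behind this kind of connection caching,
as a sketch; the control socket path is a placeholder, not git-annex's
actual socket layout:

```haskell
-- Make ssh multiplex connections over a shared control socket: the
-- first invocation becomes the master, later ones reuse its connection.
sshCachingOptions :: FilePath -> [String]
sshCachingOptions socketfile =
    [ "-S", socketfile
    , "-o", "ControlMaster=auto"
    , "-o", "ControlPersist=yes"
    ]
```

A master socket left over from before a network change is exactly the kind
of stale cached connection the caveat above is about.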
This commit was sponsored by Alexandre Dupas.
Data.Time.Calendar
Avoid back-to-back runs.
Code was still buggy, it turns out (though the recursion checker caught
it). In the case of (Schedule (Monthly Nothing) AnyTime), where the last
run was on yyyy-12-31, it looped forever.
Also, the handling of (Schedule (Yearly Nothing) AnyTime) was wacky where
the last run was yyyy-12-31. It would suggest a window starting on the 3rd
for the next run (because 31 mod 28 is 3).
I think that originally I wanted to avoid running on 01-01 if it had
just run on 12-31. But the code didn't accomplish this, and it's not
necessary anyway. This is supposed to calculate the next window meeting the
schedule, and for (Schedule (Monthly Nothing) AnyTime), the window starts at
01-01 and runs through 01-31. If that causes two back-to-back runs, the next
one will not be until 02-01 at the earliest.
Also, back-to-back runs can be avoided, if desired, by using Divisible 2.
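A sketch of that window calculation for (Schedule (Monthly Nothing)
AnyTime); the function name is illustrative, not git-annex's own:

```haskell
import Data.Time.Calendar (Day, fromGregorian, toGregorian, gregorianMonthLength)

-- The next window after a run spans the following calendar month, from
-- its first day through its last; a run on yyyy-12-31 yields the window
-- (yyyy+1)-01-01 through (yyyy+1)-01-31.
nextMonthlyWindow :: Day -> (Day, Day)
nextMonthlyWindow lastrun = (start, end)
  where
    (y, m, _) = toGregorian lastrun
    (y', m') = if m == 12 then (y + 1, 1) else (y, m + 1)
    start = fromGregorian y' m' 1
    end = fromGregorian y' m' (gregorianMonthLength y' m')
```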
candidate can be found in the next hundred years
Note that the exception thrown is not visible in the webapp currently
because it crashes one of Cronner's 2 worker threads, which is never
checked.
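A sketch of how a worker thread's exception could be surfaced instead of
vanishing with the thread, using the async library (Cronner's actual thread
management is not shown here):

```haskell
import Control.Concurrent.Async (async, waitCatch)

-- Run a worker and report a crash rather than silently losing the
-- exception when the thread dies.
superviseWorker :: String -> IO () -> IO ()
superviseWorker name worker = do
    a <- async worker
    r <- waitCatch a
    case r of
        Left e -> putStrLn (name ++ " crashed: " ++ show e)
        Right () -> return ()
```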
This is supposed to look for a day past the last day it ran, not a month
past.
Seems to work, at least in anarcat's test case.
and the last time the job ran was a day of the month > 12. This caused a
runaway loop. Thanks to Anarcat for his assistance, and to Maximiliano
Curia for identifying the cause of this bug.
Closes: #744148
implementation