On 2015-12-27 01:34, Steve Litt wrote:
> Congrats Laurent: You've suggested a kludge that the King of Kludges,
> Steve Litt, cannot abide by.
Eh, it's not even a kludge. It's a real solution to your problem.
The foobar + foobar/log mechanism from daemontools/runit/s6-svscan
(as well as the rundeux program from perp, as Georgi points out)
was precisely made to supervise two independent programs connected
by a pipe while keeping the pipe open across restarts: it's *exactly*
what you were asking for.
The fact is that the main intended use, and the most common real
use, of the feature is connecting a daemon to a dedicated logger.
That much is true, and that's why daemontools calls the reading
service "foobar/log". But that does not mean you can't use it in
other situations; the naming is only a convention. As long as you
don't need a dedicated logger for either of your services, using
the feature as I described is perfectly valid.
> But I can't begin to imagine what would be in the "compiled database"
> to which you refer.
Well, if it's for an independent deployment, sure, forget this idea. :)
> proc = subprocess.Popen(['/usr/bin/inotifywait', \
That would have been my next suggestion. Since you're using Python
anyway, you can have amounter.py create the pipe itself and manage
both processes as one.
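For instance (a minimal sketch only: the watched directory, the
events and the handling code are placeholders, not what amounter.py
actually does):

    #!/usr/bin/env python3
    # amounter.py spawns inotifywait itself; subprocess.Popen creates
    # the pipe, and the script reads events from the child's stdout.
    import subprocess

    proc = subprocess.Popen(
        ['/usr/bin/inotifywait', '-m', '-q',
         '-e', 'create', '-e', 'delete', '/media'],  # placeholder path
        stdout=subprocess.PIPE, universal_newlines=True)

    for line in proc.stdout:      # blocks until inotifywait reports something
        event = line.rstrip('\n')
        # mount/unmount logic would go here
        print('event:', event)

(Depending on your inotifywait version, you may want to check that
it flushes its output when writing to a pipe rather than a terminal.)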
Note that this is less reliable than using the log pipe from a
supervision suite. Spawning inotifywait as a child in this way
ties its lifetime to amounter.py's, and cleanup depends on
inotifywait correctly detecting a broken pipe on its stdout.
Most programs don't notice that their stdout is broken
until they try writing to it, at which point they die, and the
information they were trying to transmit is lost. If inotifywait
dies *as soon as* amounter.py dies, then it's good news, but
that's the exception rather than the rule; and killing and
restarting amounter.py will still give you a small window during
which inotifywait will not be inotifywaiting.
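You can see the problem in a few lines of Python (an illustration of
the general point, not of inotifywait specifically): closing the read
end of a pipe is invisible to the writer until it actually writes.

    import os

    r, w = os.pipe()
    os.close(r)                   # the reader goes away...
    # ...and nothing happens on the writer's side until it writes:
    try:
        os.write(w, b'event\n')
    except BrokenPipeError:
        print('the writer only notices the broken pipe on write')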
--
Laurent