Hi Dreamcat4 -
First things first - I can't stress enough how awesome it is to know
people are using/talking about my Docker images, blog posts, and so
on. Too cool!
I've responded to your concerns/questions/etc throughout the email
below.
-John
On Wed, Feb 25, 2015 at 11:32:37AM +0000, Dreamcat4 wrote:
> Thank you for moving my message Laurent.
>
> Sorry for the mixup r.e. the mailing lists. I have subscribed to the
> correct list now (for s6 specific).
>
> On Wed, Feb 25, 2015 at 11:30 AM, Laurent Bercot
> <ska-skaware_at_skarnet.org> wrote:
> >
> > (Moving the discussion to the supervision_at_list.skarnet.org list.
> > The original message is quoted below.)
> >
> > Hi Dreamcat4,
> >
> > Thanks for your detailed message. I'm very happy that s6 found an
> > application in docker, and that there's such an interest for it!
> > skaware_at_list.skarnet.org is indeed the right place to reach me and
> > discuss the software I write, but for s6 in particular and process
> > supervisors in general, supervision_at_list.skarnet.org is the better
> > place - it's full of people with process supervision experience.
> >
> > Your message gives a lot of food for thought, and I don't have time
> > right now to give it all the attention it deserves. Tonight or
> > tomorrow, though, I will; and other people on the supervision list
> > will certainly have good insights.
> >
> > Cheers!
> >
> > -- Laurent
> >
> >
> >
> > On 25/02/2015 11:55, Dreamcat4 wrote:
> >>
> >> Hello,
> >> Now there is someone (John Regan) who has made s6 images for docker,
> >> and written a blog post about it. Which is a great effort - and the
> >> reason I've come here. But it gives me a taste of wanting more:
> >> something a bit more foolproof, and simpler, that works specifically
> >> inside of docker.
> >>
> >> From that blog post I get a general impression that s6 has many
> >> advantages. And it may be a good candidate for docker. But I would be
> >> remiss not to ask the developers of s6 themselves to take some kind
> >> of personal interest in considering how s6 might best work inside of
> >> docker specifically. I hope that this is the right
> >> mailing list to reach s6 developers / discuss such matters. Is this
> >> the correct mailing list for s6 dev discussions?
> >>
> >> I've read and read around the subject of process supervision inside
> >> docker. Various people explain how or why they use various different
> >> process supervisors in docker (not just s6). None of them really quite
> >> seem ideal. I would like to be wrong about that but nothing has fully
> >> convinced me so far. Perhaps it is a fair criticism to say that I
> >> still have a lot more to learn about process supervisors. But
> >> I have no interest in getting bogged down by that. To me, I already
> >> know more-or-less enough about how docker manages (or rather
> >> mis-manages!) its container processes to have an opinion about what
> >> is needed, from a docker-side perspective. And I know enough that the
> >> docker project itself won't fix these issues - for one thing because
> >> it does not own what runs inside containers, and also because of its
> >> single-process view of things. Anyway, that kind of political nonsense
> >> doesn't matter for our discussion. I just want to have a technical
> >> discussion about what is needed, and what might be the best way to
> >> solve the problem!
> >>
> >>
> >> MY CONCERNS ABOUT USING S6 INSIDE OF DOCKER
> >>
> >> In regard to s6 only, these are my currently perceived
> >> shortcomings when using it in docker:
> >>
> >> * it's not clear how to pass in programs arguments via CMD and
> >> ENTRYPOINT in docker
> >> - in fact I have not seen ANY docker process supervisor solutions
> >> show how to do this (except perhaps phusion base image)
> >>
To be honest, I just haven't really done that. I usually use
environment variables to set up my services. For example, if I have a
NodeJS service, I'll run something like
`docker run -e NODEJS_SCRIPT="myapp.js" some-nodejs-image`
Then in my NodeJS `run` script, I'd check if that environment variable
is defined and use it as my argument to NodeJS. I'm just making up
this bit of shell code on the fly, so it might have syntax errors, but
you should get the idea:
```
#!/bin/sh
if [ -n "$NODEJS_SCRIPT" ]; then
    exec node "$NODEJS_SCRIPT"
else
    printf "NODEJS_SCRIPT undefined\n" >&2
    # create the ./down file so s6 won't keep restarting this service
    touch down
    exit 1
fi
```
Another option is to write a script to use as an entrypoint that
handles command arguments, then execs into s6-svscan.
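Something along those lines would look like this - just a sketch, not
code from my actual images, and the /etc/s6 scan directory and "main"
service name are made up for illustration:
```
#!/bin/sh
# Hypothetical entrypoint sketch: stash any CMD arguments where the
# "main" service's run script can pick them up, then hand PID 1 over
# to s6-svscan. /etc/s6 and /etc/s6/main are assumed paths.
if [ "$#" -gt 0 ]; then
    printf '%s\n' "$@" > /etc/s6/main/args
fi
exec s6-svscan /etc/s6
```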
> >> * it is not clear if ENV vars are preserved. That is also something
> >> essential for docker.
In my experience, they are. If you use s6-svscan as your entrypoint
(like I do in my images) and define environment variables via docker's
-e switch, they'll be preserved and available in each service's `run`
script, just like in my NodeJS example above.
> >>
> >> * s6 has many utilities s6-*
> >> - not clear which ones are actually required for making a docker
> >> process supervisor
The only *required* programs are the ones in the main s6 and execline
packages.
> >>
> >> * s6 not available yet as .deb or .rpm package
> >> - official packages are helpful because on different distros:
> >> + standard locations where to put config files and so on may
> >> differ.
> >> + to install man pages too, in the right place
> >>
> >> * s6 is not available as an official single pre-compiled binary file
> >> for download via wget or curl
> >> - which would be the ideal way to install it into a docker
> >> container
I can't speak for deb/rpm, but I do have a Docker image I use to
download+compile s6:
https://github.com/jprjr/docker-misc/tree/s6-builder/dockerfiles/arch-s6-builder
I keep the compiled utilities in the "artifacts" folder on GitHub.
That's far from ideal - I'm really bad at keeping those up to date.
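For anyone rolling their own, the build is basically just compiling the
skarnet packages in dependency order. A rough sketch (the github mirror
URLs and default configure options are assumptions, so adjust for your
distro - this is not my actual builder image):
```
# Build skalibs, execline, and s6 in dependency order.
for pkg in skalibs execline s6; do
    git clone "https://github.com/skarnet/$pkg" &&
    ( cd "$pkg" && ./configure && make && make install ) || exit 1
done
```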
> >>
> >>
> >> ^^ Some of these perceived shortcomings are more important /
> >> significant than others! Some are not in the remit of s6 development
> >> to be concerned about. Some are mild nit-picking, or the ignorance of
> >> not knowing, having not actually tried out s6 before.
> >>
> >> But my general point is that it is not clear enough to me (from my
> >> perspective) whether s6 can actually satisfy all of the significant
> >> docker-specific considerations. Which I have not properly stated yet.
> >> So here they are listed below…
> >>
> >>
> >> DOCKER-SPECIFIC CONSIDERATIONS FOR A PROCESS SUPERVISOR
> >>
> >> A good process supervisor for docker should ideally:
> >>
> >> * be a single pre-compiled binary program file that can be downloaded
> >> by curl/wget (or can be installed from .deb or .rpm).
> >>
> >> * can directly take a command and arguments, with argv[] like this:
> >> "process_supervisor" "my_program_or_script" "my program or script
> >> arguments…"
Hm. The way I see it, if you're building an image for some service,
and using s6, you're going to be writing a `run` script anyway, which
would handle getting arguments set up for your program.
I try to treat my ENTRYPOINT program like /sbin/init on an actual
Linux install. You wouldn't pass arguments like that to /sbin/init,
just arguments for /sbin/init itself.
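So the arguments question really gets answered in the run script.
Continuing the hypothetical entrypoint sketch from earlier, a run
script could rebuild argv like this (again just a sketch - "myservice"
and the args file are made-up names):
```
#!/bin/sh
# Hypothetical run script for the "main" service (s6-supervise runs it
# from inside the service directory). Rebuild argv from the args file
# written by the entrypoint sketch, or fall back to no arguments.
if [ -f ./args ]; then
    set --
    while IFS= read -r arg; do
        set -- "$@" "$arg"
    done < ./args
    exec myservice "$@"
else
    exec myservice
fi
```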
> >>
> >> * will pass on all ENV vars to "my_program_or_script" faithfully
Done and done.
> >>
> >> * will run as PID 1 inside the linux namespace
> >>
> >> * where my_program_or_script may spawn BOTH child AND non-child
> >> (orphaned) processes
> >>
> >> * when "process_supervisor" (e.g. s6 or whatever) receives a TERM signal
> >> * it faithfully passes that signal to "my_program_or_script"
> >> * it also passes that signal to any orphaned non-child processes too
> >>
> >> * when my_program_or_script dies, or exits
> >> * clean up ALL remaining non-child orphaned processes afterwards
> >> * which share the same linux namespace
> >> - this is VERY important for docker, as docker does not do this
> >>
> >> So to ENSURE these things:
> >>
> >> * to pass full command line arguments and ENV vars to
> >> "my_program_or_script"
> >>
> >> * ensure that when "my_program_or_script" exits, (crashes or normal exit)
> >> * no processes are left running in that linux namespace
> >>
> >> * ensure that when the service receives a TERM (docker stop)
> >> * no processes are left running in that linux namespace
> >>
> >> * ensure that when the service receives a KILL (docker stop)
> >> * no processes are left running in that linux namespace
s6 handles most of that out of the box. Ensuring that no processes are
left running is up to your `finish` script, I believe.
> >>
> >>
> >> BUT in addition, any extra configurability should be entirely optional:
> >>
> >> * to add supplemental run scripts (for support programs like cron and
> >> rsyslog etc)
> >> * which is what most general process supervisors consider as their
> >> main mechanism for starting services
> >>
> >> SO
> >> * if "my_program_or_script" is supplied as an argument, THAT is the
> >> main process running inside the docker container.
> >> * if no "my_program_or_script" argument is supplied, then use
> >> whatever other conventional ways to determine the main process from
> >> the directories of run scripts.
That kind of makes sense, but I'll fill in some gaps/background for
those not familiar with Docker.
Most Docker images just run one process (say nginx). If that process
dies, the container dies.
When you start talking about running process supervisors in
containers, a *lot* of folks get upset about how that isn't "the
Docker way" - because your image doesn't operate the same way as
others. The container won't die, since if a process dies it'll just be
restarted.
In my blog post, I advocated that you should pick a "main" process -
set up your container so that if some core, fundamental process dies,
the container dies, too. That way, your image still functions like
other existing Docker images - you're not totally breaking the ecosystem.
I think this is what Dreamcat is talking about - he wants to pick some
process as the "if this dies bring everything down with it" process.
In my images, I just make my "main" service's finish script call
s6-svscanctl -t.
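In concrete terms that finish script is tiny - something like this,
assuming /etc/s6 as the scan directory (a sketch, not the exact script
from my images):
```
#!/bin/sh
# Hypothetical finish script for the "main" service: when the main
# process exits, ask s6-svscan to tear the whole supervision tree
# down, which ends the container. /etc/s6 is an assumed scan dir.
exec s6-svscanctl -t /etc/s6
```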
> >>
> >>
> >>
> >> SUMMARY
> >>
> >> Current solutions for docker seem to be "too complex" for various
> >> reasons. Such as:
> >>
> >> * no mandatory ssh server (phusion baseimage fails)
> >> * no python dependency (supervisor fails, and phusion baseimage fails)
> >> * no mandatory cron or rsyslog (current s6 base image fails, and
> >> phusion base image fails)
> >> * no "hundreds of cli tools" - most of which may never be used
> >> (current s6 base image fails)
> >> * no awkward intermediate bootstrap script required to write ENV vars
> >> and cmdline args to a file (runit fails)
> >>
> >> So for whichever of those reasons, it feels like the docker problem
> >> remains unsatisfied, and not fully addressed by any individual
> >> solution. What's the best course of action? How can these problems be
> >> solved by one single tool?
> >>
> >> I am hoping that the person who wrote this page:
> >>
> >> http://skarnet.org/software/s6/why.html
> >>
> >> might be able to comment on some of these above ^^ docker-specific
> >> considerations? If so, it would really be appreciated.
> >>
> >> Also cc'ing the author of the docker s6 base images, who perhaps can
> >> comment on some of the problems he has encountered. Many thanks for
> >> any comments again (but please reply on the mailing list).
> >>
> >>
> >> Kind Regards
> >> dreamcat4
> >
> >
Received on Wed Feb 25 2015 - 14:46:35 UTC