Re: process supervisor - considerations for docker

From: Dreamcat4 <dreamcat4_at_gmail.com>
Date: Fri, 27 Feb 2015 10:39:15 +0000

On Thu, Feb 26, 2015 at 11:40 PM, Laurent Bercot
<ska-supervision_at_skarnet.org> wrote:
> On 26/02/2015 21:53, John Regan wrote:
>>
>> Besides, the whole idea here is to make an image that follows best
>> practices, and best practices state we should be using a process
>> supervisor that cleans up orphaned processes and stuff. You should be
>> encouraging people to run their programs, interactively or not, under
>> a supervision tree like s6.
>
>
> The distinction between "process" and "service" is key here, and I
> agree with John.
>
> <long design rant>
> There's a lot of software out there that seems built on the assumption that
> a program should do everything within a single executable, and that
> processes
> that fail to address certain issues are incomplete and the program needs to
> be patched.
>
> Under Unix, this assumption is incorrect. Unix is mostly defined by its
> simple and efficient interprocess communication, so a Unix program is best
> designed as a *set* of processes, with the right communication channels
> between them, and the right control flow between those processes. Using
> Unix primitives the right way allows you to accomplish a task with minimal
> effort by delegating a lot to the operating system.
>
> This is how I design and write software: to take advantage of the design
> of Unix as much as I can, to perform tasks with the lowest possible amount
> of code.
> This requires isolating basic building blocks, and providing those building
> blocks as binaries, with the right interface so users can glue them
> together on the command line.
>
> Take the "syslogd" service. The "rsyslogd" way is to have one executable,
> rsyslogd, that provides the syslogd functionality. The s6 way is to combine
> several tools to implement syslogd; the functionality already exists, even
> if it's not immediately apparent. This command line should do:
>
> pipeline s6-ipcserver-socketbinder /dev/log \
>   s6-envuidgid nobody s6-applyuidgid -Uz s6-ipcserverd ucspilogd "" \
>   s6-envuidgid syslog s6-applyuidgid -Uz s6-log /var/log/syslogd
>
> Yes, that's one unique command line. The syslogd implementation will take
> the form of two long-running processes, one listening on /dev/log (the
> syslogd socket) as user nobody, and spawning a short-lived ucspilogd process
> for every connection to syslog; and the other writing the logs to the
> /var/log/syslogd directory as user syslog and performing automatic rotation.
> (You can configure how and where things are logged by writing a real s6-log
> script at the end of the command line.)
>
> Of course, in the real world, you wouldn't write that. First, because s6
> provides some shortcuts for common operations so the real command lines
> would be a tad shorter, and second, because you'd want the long-running
> processes to be supervised, so you'd use the supervision infrastructure
> and write two short run scripts instead.
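>
> For illustration, here is an untested sketch of what those two run
> scripts might look like, written as execline scripts in a "syslogd"
> service directory. It assumes execlineb lives in /usr/bin, and it
> relies on standard s6 behaviour: when a service directory has a log/
> subdirectory, s6-svscan maintains a pipe from the service's stdout to
> the logger's stdin, which replaces the explicit "pipeline" above.
>
>   syslogd/run:
>     #!/usr/bin/execlineb -P
>     s6-ipcserver-socketbinder /dev/log
>     s6-envuidgid nobody
>     s6-applyuidgid -Uz
>     s6-ipcserverd ucspilogd
>
>   syslogd/log/run:
>     #!/usr/bin/execlineb -P
>     s6-envuidgid syslog
>     s6-applyuidgid -Uz
>     s6-log /var/log/syslogd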
>
> (And so, to provide syslogd functionality to one client, you'd really have
> 1 s6-svscan process, 2 s6-supervise processes, 1 s6-ipcserverd process,
> 1 ucspilogd process and 1 s6-log process. Yes, 6 processes. This is not as
> insane as it sounds. Processes are not a scarce resource on Unix; the
> scarce resources are RAM and CPU. The s6 processes have been designed to
> take *very* little of those, so the total amount of RAM and CPU they all
> use is still smaller than the amount used by a single rsyslogd process.)
>
> There are good reasons to program this way. Mostly, it amounts to writing
> as little code as possible. If you look at the source code for every single
> command that appears on the insane command line above, you'll find that it's
> pretty short, and short means maintainable - which is the most important
> quality to have in a codebase, especially when there's just one guy
> maintaining it.
> Using high-level languages also reduces the source code's size, but it
> adds the interpreter's or run-time system's overhead, and a forest of
> dependencies. What is then run on the machine is not lightweight by any
> measure. (Plus, most of those languages are total crap.)
>
> Anyway, my point is that it often takes several processes to provide a
> service, and that it's a good thing. This practice should be encouraged.
> So, yes, running a service under a process supervisor is the right design,
> and I'm happy that John, Gorka, Les and other people have figured it out.
>
> s6 itself provides the "process supervision" service not as a single
> executable, but as a set of tools. s6-svscan doesn't do it all, and it's
> by design. It's just another basic building block. Sure, it's a bit special
> because it can run as process 1 and is the root of the supervision tree,
> but that doesn't mean it's a turnkey program - the key lies in how it's
> used together with other s6 and Unix tools.
> That's why starting s6-svscan directly as the entrypoint isn't such a
> good idea. It's much more flexible to run a script as the entrypoint
> that performs a few basic initialization steps then execs into s6-svscan.
> Just like you'd do for a real init. :)
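>
> A minimal, hedged sketch of such an entrypoint (the /etc/s6 scan
> directory and the mkdir step are placeholders for whatever your
> container actually needs):
>
>   #!/usr/bin/execlineb -P
>   # one-time container initialization goes here, for example:
>   foreground { mkdir -p /var/run/s6 }
>   # then exec (not fork) into the root of the supervision tree
>   s6-svscan /etc/s6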
> </long design rant>
>
>
>>
>> Heck, most people don't *care* about this kind of thing because they
>> don't even know. So if you just make /init the ENTRYPOINT, 99% of
>> people will probably never even realize what's happening. If they can
>> run `docker run -ti imagename /bin/sh` and get a working, interactive
>> shell, and the container exits when they type "exit", then they're
>> good to go! Most won't even question what the image is up to, they'll
>> just continue on getting the benefits of s6 without even realizing it.
>
>
> Ideally, that's what would happen. We must ensure that the abstraction
> holds steadily, though - there's nothing worse than a leaky abstraction.
>
>
>>> The main thing I'm concerned about is about preserving proper shell
>>> quoting, because sometimes args can be like --flag='some thing'.
>
>
> This is a solved problem.
> The entrypoint we're talking about is trivial to write in execline,
> and I'll support Gorka, or anyone else, who does that. Since the
> container will already have execline, using it for the entrypoint
> costs nothing, and it makes command line handling and transmission
> utterly trivial: it's exactly what I wrote it for.
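>
> To make "utterly trivial" concrete: an execline script started with
> -S0 receives the container's whole ENTRYPOINT+CMD tail as $@, with
> quoting preserved, so handing it off is a single substitution. A
> hedged sketch, with a made-up name for the initialization script:
>
>   #!/usr/bin/execlineb -S0
>   # perform the few basic initialization steps...
>   foreground { /etc/s6/init-stage1 }
>   # ...then exec into whatever command line docker passed us
>   $@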

Yes, Laurent. I would also prefer to have Gorka's init rewritten in
"not bash", as bash is not available in all images (busybox-based
ones, for example). I am not against using C or Go either, but since
an execline script doesn't have to be compiled, it sounds easier to
work with.

EDIT: Anyway, Gorka is on board with execline now. Many thanks, Gorka.

Regarding my comments that you should not prescribe how people set
their entrypoint: again, my reasoning has nothing to do with 'should'
or best practices. The reasons you have stated are ones which I mostly
agree with!

Let me explain my point with an example:

I am writing an image for the tvheadend server. The tvheadend program
has some default arguments, which are almost always:

-u hts -g video -c /config

Then after that we might append user-specific flags, which for my
personal use are:

--satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218

Those user flags get set in CMD. As a user, I set them from my
orchestration tool, crane, in a 'crane.yml' YAML configuration file.
Then I type 'crane lift', which appends (and overrides) CMD at the end
of 'docker run ...'.
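
For concreteness, here is a hypothetical crane.yml fragment. The exact
schema is crane's and may differ between versions, and the image name
is made up, so treat this purely as a sketch:

  containers:
    tvheadend:
      image: dreamcat4/tvheadend
      run:
        detach: true
        cmd: "--satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218"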

Another user comes along; their user-specific (last) arguments will be
entirely different, and they should naturally use CMD to set them.
This is all something you guys have already stated.

BUT, as the image writer, I don't want them to wipe out (override) the
first, "default" part:

-u hts -g video -c /config

That is because the user name, group name, and "/config" configuration
dir (a VOLUME) were all hard-coded and baked into the tvheadend
image's Dockerfile. HOWEVER, the few people who do want to override
them still can, by setting --entrypoint=, for example to start up the
image as a different user.

BUT because those arguments are almost never changed, they are tacked
onto the end of the entrypoint instead, as that is the best place to
put them. It saves every user from unnecessarily repeating the same
set of default arguments in their CMD part. So, as the writer of the
tvheadend image, the image's default ENTRYPOINT and CMD are:

ENTRYPOINT ["/tvheadend","-u","hts","-g","video","-c","/config"]
CMD *nothing* (or else could be "--help")

and after converting the image to use s6 they will be:

ENTRYPOINT ["/init", "/tvheadend","-u","hts","-g","video","-c","/config"]
CMD *nothing* (or else could be "--help")

And it is that easy. After making such a change, no user of my
tvheadend image will be affected… because users are only meant to
override CMD. And if they choose to override the entrypoint (and
accidentally remove '/init'), then they are entirely on their own.
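
For example, a typical user invocation would only supply the CMD
flags, which docker appends after the baked-in entrypoint arguments
(the image name here is a placeholder):

  docker run -d my/tvheadend \
    --satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218

which ends up exec'ing:

  /init /tvheadend -u hts -g video -c /config \
    --satip_xml http://192.168.1.22:8080/desc.xml --bindaddr 192.168.2.218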

What we are providing is a general base image, which doesn't
necessarily go directly to end users. Rather, it can be used by both
audiences: docker image writers OR users. So there can still be that
other level (or not); we don't necessarily have control over that
aspect.

So to recap:

The part I agree with both you guys about is that "/init" should go at
the front of the entrypoint. What I don't believe would be correct is
to tell people that ENTRYPOINT should be exclusively reserved for
"/init" alone, because ultimately it should remain their decision how
they use the entrypoint. And whether they are an image writer or a
regular docker user, we (as the base image writers) cannot actually be
sure.

That is why I think it would be a lot easier to use a statement such as:

"just make sure that "/init" is tacked on to the very front (as the
first argument) of whatever your ENTRYPOINT+CMD is".

That statement above ^^ makes universal sense to both image writers
and regular docker users.
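
Concretely, with placeholder names, that one rule covers both cases:

  # image writer: prepend /init in a Dockerfile built FROM the base image
  ENTRYPOINT ["/init", "/my-daemon", "--some-default-flag"]

  # regular user: if you override the entrypoint, keep /init in front
  docker run -ti --entrypoint=/init my/image /bin/sh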

That statement says 100% of everything they need to know. It is up to
those people to read up and understand in general what ENTRYPOINT and
CMD are, rather than having you educate them about it; that is not
really your responsibility. There is plenty of general documentation
from Docker Inc., and plenty of articles, about that subject already.

It is really nothing to be concerned or up in arms about. Likewise, I
feel some of my other statements, the ones you guys say you 'disagree
with', have simply been unclear or misinterpreted.

Take my previous comments about the single-managed-process
consideration. A single managed process is a docker-specific
consideration, because it pretty much occurs only in docker; managing
multiple processes is already provided by all current popular process
supervisors (including s6). I did not want to have a discussion about
that: I wanted to talk about missing features only. Since
multiple-process support is already well implemented, it is not
missing functionality, not a 'gap' in the current feature set, but a
general consideration that holds everywhere and is not unique to
docker. So of course I believe that John's stated 'one service per
container' model 'can be comprised of multiple processes and multiple
managed services'.

Perhaps I sometimes assume people understand my intentions too well,
and don't frame those statements properly, or am a bit unclear about
the reasoning behind them.

>
> --
> Laurent
>