Podman(1) not working inside JupyterLab containers that are launched by JupyterHub

Hello Friends:

This question originally appeared on GitHub as: podman(1) not working inside JupyterLab containers that are spawned by JupyterHub (i.e. via DockerSpawner) ….

I have a JupyterLab (LAB) container image in which podman(1) commands work fine when I launch the container manually via the CLI, but which produces podman(1) container-related errors when that same image is spawned via JupyterHub's DockerSpawner.

Let me show you the issue step-by-step.

Here are JupyterHub and Postgres running (no issues here; these two work perfectly):

root@GUEST# docker ps -a --no-trunc
ID     IMAGE                COMMAND                                    PORTS                    NAMES
61cfb  acme/jupyterhub:1.0  "/opt/jupyterhub.d/usr/bin/jupyterhub.sh"  0.0.0.0:443->443/tcp     jupyterhub
d4793  postgres:latest      "docker-entrypoint.sh postgres"            0.0.0.0:15432->5432/tcp  jupyterhub-db

WORKING CASE

Spawn the JupyterLab container image manually via the CLI and issue podman(1) commands. Note that these manual steps do not use the JupyterHub services above.

root@GUEST# docker run \
        -it --name jupyter-janedoe \
        --privileged \
        -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
        -p 18888:8888 acme/jupyterlab-base:1.0 \
        /bin/bash

jovyan@CONTAINER$ sudo dnf -y install podman 
   # This is Red Hat's podman(1) utility, which I actually bake into this image when I
   # "docker build [...]" it. I'm just trying to be explicit here. =:)

jovyan@CONTAINER$ podman pull --events-backend=file docker.io/library/mariadb:latest

jovyan@CONTAINER$ podman image ls
REPOSITORY                 TAG     IMAGE ID      CREATED      SIZE
docker.io/library/mariadb  latest  3a348a04a815  3 weeks ago  413 MB

jovyan@CONTAINER$ mkdir -p ${HOME}/data.d/mariadb.d/var/lib/mysql

jovyan@CONTAINER$ podman run --events-backend=file \
      --name mariadb01 \
     -p 3306:3306 \
     -v ${HOME}/data.d/mariadb.d/var/lib/mysql:/var/lib/mysql:Z \
     -e MYSQL_ROOT_PASSWORD=**** \
     -e MYSQL_DATABASE=db01 \
     -e MYSQL_USER=janedoe \
     -e MYSQL_PASSWORD=**** \
     -d docker.io/library/mariadb

jovyan@CONTAINER$ podman ps --no-trunc
ID      IMAGE                      CMD     PORTS                   NAMES
1ac150  docker.io/library/mariadb  mysqld  0.0.0.0:3306->3306/tcp  mariadb01

jovyan@CONTAINER$ podman run \
      --events-backend=file \
      -it --rm mariadb \
      mysql -e 'show databases' -h 172.17.0.2 -u root -p
Enter password: 
+--------------------+
| Database           |
+--------------------+
| db01               |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
jovyan@CONTAINER$

Here is some In-CONTAINER O/S information, which we’ll compare with the FAILURE CASE below:

jovyan@CONTAINER$ id
uid=1000(jovyan) gid=100(users) groups=100(users)

jovyan@CONTAINER$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
jovyan       1       0  0 00:13 pts/0    00:00:00 tini -g -- /bin/bash
jovyan       6       1  0 00:13 pts/0    00:00:00 /bin/bash
jovyan      40       1  0 00:14 ?        00:00:00 podman
jovyan     307       1  0 00:17 ?        00:00:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/jovyan/.local/sh
jovyan     309       1  0 00:17 pts/0    00:00:00 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --e
jovyan     311       1  0 00:17 pts/0    00:00:00 containers-rootlessport
jovyan     324     311  0 00:17 pts/0    00:00:00 containers-rootlessport-child
jovyan     338       1  0 00:17 ?        00:00:00 /usr/bin/conmon --api-version 1 -c 1ac1502640165637e2b5969a7
100998     349     338  0 00:17 ?        00:00:00 mysqld
jovyan    1000       6  0 00:33 pts/0    00:00:00 ps -ef

jovyan@CONTAINER$ df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         1.8T  187G  1.6T  11% /
tmpfs            64M     0   64M   0% /dev
shm              64M   84K   64M   1% /dev/shm
/dev/sdb1       1.8T  187G  1.6T  11% /etc/hosts
tmpfs           4.0M     0  4.0M   0% /sys/fs/cgroup

jovyan@CONTAINER$ mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/MKCAZS7NCCQUDMYCDUPVVG34TH:/var/lib/d
ocker/overlay2/l/BZPXDZSY7OMH6LSH43ZFMO3SYT:/var/lib/docker/overlay2/l/I65E26L6EXPUXAJ4JVFDEKGIRU:/var/lib/docke
r/overlay2/l/GHJSNVHCXVUL4AG6J4SGTR4MOL:/var/lib/docker/overlay2/l/V5L7C7DUUSCOEJ4R6YYDRUOKEP:/var/lib/docker/ov
erlay2/l/A47I4M26GPBDPYVKCZ22QUNJUE:/var/lib/docker/overlay2/l/UBZECXYTT5SKO5FAUGJYXPDZ76:/var/lib/docker/overla
y2/l/QISGNIRTGUVC2ST5OO7SBTLPEW:/var/lib/docker/overlay2/l/R4KP7BPSJFQF3D2L6YVIX7MWJW:/var/lib/docker/overlay2/l
/ZFVOGFK4FXDMGZLDTVCZP5CXCO:/var/lib/docker/overlay2/l/IQZVKDF2N73EJSRNJ5UFQZLABK:/var/lib/docker/overlay2/l/FMU
QWHAODMEGORO3UKJHQWDVBT,upperdir=/var/lib/docker/overlay2/c177c74b2ecded4eecb6c88f54983fae4affef9f0197aa0e83fa6a
9049731833/diff,workdir=/var/lib/docker/overlay2/c177c74b2ecded4eecb6c88f54983fae4affef9f0197aa0e83fa6a904973183
3/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/sdb1 on /etc/resolv.conf type ext4 (rw,relatime)
/dev/sdb1 on /etc/hostname type ext4 (rw,relatime)
/dev/sdb1 on /etc/hosts type ext4 (rw,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,size=4096k,nr_inodes=1024,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)

jovyan@CONTAINER$ exit # End the session, which removes the container.
root@GUEST#

FAILURE CASE

Now, if I perform the exact same sequence, except that I'm logged into that same JupyterLab container spawned via the JupyterHub UI instead (that is, spawned via DockerSpawner), I get the following error when I issue the podman run command to launch mariadb:

Error: OCI runtime error: container_linux.go:370: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: no cgroup mount found in mountinfo
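For reference, that error comes from the runtime scanning /proc/self/mountinfo and finding no cgroup filesystem mounted. A quick way to confirm what the runtime actually sees is a small stdlib-only mountinfo scan (a sketch; run it against the real /proc/self/mountinfo inside each container to compare):

```python
# Minimal /proc/self/mountinfo scanner: lists mount points whose filesystem
# type is cgroup or cgroup2 -- the entries the OCI runtime complained about.
def cgroup_mounts(mountinfo_text):
    found = []
    for line in mountinfo_text.splitlines():
        # Fields after the " - " separator are: fstype, source, super options.
        pre, _, post = line.partition(" - ")
        if not post:
            continue
        fstype = post.split()[0]
        if fstype in ("cgroup", "cgroup2"):
            mount_point = pre.split()[4]  # 5th field is the mount point
            found.append((fstype, mount_point))
    return found

# Abridged sample lines in mountinfo format; in the failing container the
# cgroup/cgroup2 lines are simply absent, so the result is an empty list.
sample = """\
30 23 0:26 / /sys/fs/cgroup ro shared:9 - tmpfs tmpfs ro,mode=755
31 30 0:27 / /sys/fs/cgroup/unified rw shared:10 - cgroup2 cgroup2 rw,nsdelegate
32 30 0:28 / /sys/fs/cgroup/systemd rw shared:11 - cgroup cgroup rw,name=systemd
36 23 8:17 /etc/hosts /etc/hosts rw shared:12 - ext4 /dev/sdb1 rw,relatime
"""
print(cgroup_mounts(sample))
# → [('cgroup2', '/sys/fs/cgroup/unified'), ('cgroup', '/sys/fs/cgroup/systemd')]
```

In the working container this reports the cgroup mounts; in the DockerSpawner-launched container (where /sys/fs/cgroup shows up only as an ext4 bind, per the mount output below) it would report nothing, matching the error.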

Here is some O/S information from inside this DockerSpawner-spawned container (compare it with the WORKING CASE above):

root@GUEST# docker exec -it jupyter-janedoe /bin/bash # Already spawned. We're just connecting to it.

jovyan@CONTAINER$ id
uid=1000(jovyan) gid=100(users) groups=100(users)

jovyan@CONTAINER$ ps -ef
UID          PID    PPID  C STIME TTY          TIME CMD
jovyan       1       0  0 00:41 ?        00:00:00 tini -g -- start-singleuser.sh --ip=0.0.0.0 --port=8888 --no
jovyan       6       1  0 00:41 ?        00:00:01 /opt/conda/bin/python /opt/conda/bin/jupyterhub-singleuser -
jovyan      54       0  0 00:42 pts/0    00:00:00 /bin/bash
jovyan     111       1  0 00:45 ?        00:00:00 podman
jovyan     413      54  0 00:54 pts/0    00:00:00 ps -ef

jovyan@CONTAINER$ df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         1.8T  187G  1.6T  11% /
tmpfs            64M     0   64M   0% /dev
shm              64M   84K   64M   1% /dev/shm
tmpfs            32G     0   32G   0% /run
tmpfs            32G     0   32G   0% /tmp
tmpfs            32G     0   32G   0% /run/lock
/dev/sdb1       1.8T  187G  1.6T  11% /etc/hosts

jovyan@CONTAINER$ mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/P6F43PPRJY25RJKWD6JQAU6CMG:/var/lib/d
ocker/overlay2/l/BZPXDZSY7OMH6LSH43ZFMO3SYT:/var/lib/docker/overlay2/l/I65E26L6EXPUXAJ4JVFDEKGIRU:/var/lib/docke
r/overlay2/l/GHJSNVHCXVUL4AG6J4SGTR4MOL:/var/lib/docker/overlay2/l/V5L7C7DUUSCOEJ4R6YYDRUOKEP:/var/lib/docker/ov
erlay2/l/A47I4M26GPBDPYVKCZ22QUNJUE:/var/lib/docker/overlay2/l/UBZECXYTT5SKO5FAUGJYXPDZ76:/var/lib/docker/overla
y2/l/QISGNIRTGUVC2ST5OO7SBTLPEW:/var/lib/docker/overlay2/l/R4KP7BPSJFQF3D2L6YVIX7MWJW:/var/lib/docker/overlay2/l
/ZFVOGFK4FXDMGZLDTVCZP5CXCO:/var/lib/docker/overlay2/l/IQZVKDF2N73EJSRNJ5UFQZLABK:/var/lib/docker/overlay2/l/FMU
QWHAODMEGORO3UKJHQWDVBT,upperdir=/var/lib/docker/overlay2/bdd2a5ddab65e7b5e2b45e1854c6a8008a35d89f39aa5b57bdbb52
d2259634bb/diff,workdir=/var/lib/docker/overlay2/bdd2a5ddab65e7b5e2b45e1854c6a8008a35d89f39aa5b57bdbb52d2259634b
b/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/sdb1 on /etc/resolv.conf type ext4 (rw,relatime)
/dev/sdb1 on /etc/hostname type ext4 (rw,relatime)
/dev/sdb1 on /etc/hosts type ext4 (rw,relatime)
/dev/sdb1 on /home/jovyan type ext4 (rw,relatime)
/dev/sdb1 on /sys/fs/cgroup type ext4 (rw,relatime)

jovyan@CONTAINER$ exit
root@GUEST#

One difference you’ll immediately notice is that many cgroup / cgroup2 mount entries are missing in the DockerSpawner failure case.

Finally, I will note that ./jupyterhub_config.py is configured to spawn JupyterLab containers as privileged. That is, the ./jupyterhub_config.py snippet below is intended to be equivalent to telling DockerSpawner to run the docker container with these options, which I used in the working case:

docker run [ ... ] --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro [ ... ]

root@GUEST# vi .../jupyterhub_config.py
[ ... snip ... ]
c.DockerSpawner.extra_host_config.update({
            "privileged" : True,
            "devices"    : ["/sys/fs/cgroup:/sys/fs/cgroup:ro",],
            "tmpfs"      : {"/tmp":"", "/run":"", "/run/lock":""}, })
[ ... snip ... ]

And via docker inspect jupyter-janedoe, one can see that these settings are in effect. Extracted from that output:

[ ... snip ... ]
    "Devices": [
        {
          "PathOnHost": "/sys/fs/cgroup",
          "PathInContainer": "/sys/fs/cgroup",
          "CgroupPermissions": "ro"
        }
]
[ ... snip ... ]

    "Tmpfs": {
               "/run": "",
               "/run/lock": "",
               "/tmp": ""
    }
[ ... snip ... ]

    "Volumes": {
                 "/home/jovyan": {},
                 "/sys/fs/cgroup": {}
    }
[ ... snip ... ]
    "Privileged": true,
[ ... snip ... ]

In summary, I need to understand the difference between spawning the container manually via the CLI and having DockerSpawner spawn it. In other words, if we reverse-engineered DockerSpawner to derive the equivalent docker run [ ... ] command that it issues (yes, I know it uses the programmatic API), we could see why the two O/S environments do not come up identically with respect to privilege (container runtime), cgroups, and other O/S context. That, in turn, would let me tweak ./jupyterhub_config.py so that podman(1) works.
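To make the "reverse-engineering" idea concrete: the relevant HostConfig fields from docker inspect map back to docker run flags fairly mechanically. Here is a toy translator covering only the handful of fields discussed in this thread (Privileged, Binds, Tmpfs), not the full Docker API:

```python
# Toy HostConfig -> "docker run" flag translator. Covers only the fields
# discussed in this thread; the real mapping has many more fields.
def hostconfig_to_flags(host_config):
    flags = []
    if host_config.get("Privileged"):
        flags.append("--privileged")
    for bind in host_config.get("Binds") or []:
        flags.append(f"-v {bind}")
    for path, opts in (host_config.get("Tmpfs") or {}).items():
        flags.append(f"--tmpfs {path}" + (f":{opts}" if opts else ""))
    return flags

# HostConfig as reported for the CLI-launched (working) container:
cli_case = {"Privileged": True, "Binds": ["/sys/fs/cgroup:/sys/fs/cgroup:ro"]}
print(hostconfig_to_flags(cli_case))

# ...versus the DockerSpawner-launched (failing) container:
spawner_case = {
    "Privileged": True,
    "Binds": ["jupyterhub-user-janedoe:/home/jovyan:rw"],
    "Tmpfs": {"/tmp": "", "/run": "", "/run/lock": ""},
}
print(hostconfig_to_flags(spawner_case))
```

Run against the two inspect outputs, the missing `-v /sys/fs/cgroup:/sys/fs/cgroup:ro` flag in the spawner case jumps right out.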

Any help is appreciated from the DockerSpawner experts. Thank you in advance! =:)

Is there a reason /sys/fs/cgroup is specified as a device instead of a volume?

Hi @manics

First a quick note before replying to your question: I’m now maintaining the Original Post (the original question) on GitHub, because I can no longer edit the Original Post here. See one of my replies for the GitHub URLs.

Now for your question. This is a GREAT question, and one that has been gnawing at me too, because I specified that item as a --volume (-v) on the CLI. Honestly, I saw that snippet elsewhere and reused it, so it may not be 100% correct.

I checked the Docker low-level Python API to see whether that entry for /sys/fs/cgroup was correct or incorrect, but it was very late last night and I couldn’t reverse-engineer it.

Do you happen to know what the correct c. attribute(s) and syntax for the above /sys/fs/cgroup volume mapping are?

UPDATE1:
I tried variations of the below snippet, but with no luck (same error):

# ./jupyterhub_config.py -- snippet.

# ===========================================================================
c.DockerSpawner.volumes = dict()
c.DockerSpawner.extra_host_config = dict()
[ ... snip ... ]
# ===========================================================================
c.DockerSpawner.extra_host_config.update({
            "privileged" : True,
            "tmpfs"      : {"/tmp":"", "/run":"", "/run/lock":"", "/sys/fs/cgroup":""},
            })

c.DockerSpawner.volumes.update({"/sys/fs/cgroup" : {"bind":"/sys/fs/cgroup", "mode": "ro"}})
# ===========================================================================

More (and perhaps something different) might be needed to get this working correctly, so reverse-engineering what DockerSpawner does and needs is important.

Along those lines, in addition to the in-CONTAINER mount(1), df(1), and ps(1) outputs that I provided in the original post, below are the respective JSON outputs of docker inspect jupyter-janedoe, first as issued against the image spawned via the docker CLI (see the OP for the command), and then against that same image spawned via DockerSpawner:

(Remove the /raw/ to see highlighted JSON).

The CLI version of the output will, as expected, lack elements related to a JupyterHub login session (e.g. tokens, callback URLs, etc.). But that’s okay: what’s necessary is understanding the differences in Docker and O/S runtime-environment key/value pairs, so that we can, in turn, understand what ./jupyterhub_config.py requires. Thank you again in advance. :blush:
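Rather than eyeballing the two JSON dumps, a small helper can surface only the keys that differ. This is a stdlib-only sketch (feed it the two parsed docker inspect documents; the inline data here are abridged samples from this thread):

```python
import json

# Recursively compare two parsed "docker inspect" documents and report the
# dotted paths whose values differ (including keys present on one side only).
def diff_inspect(a, b, path=""):
    diffs = []
    if isinstance(a, dict) and isinstance(b, dict):
        for key in sorted(set(a) | set(b)):
            diffs += diff_inspect(a.get(key), b.get(key), f"{path}.{key}".lstrip("."))
    elif a != b:
        diffs.append((path, a, b))
    return diffs

# Abridged HostConfig sections from the two cases in this thread:
cli = json.loads('{"HostConfig": {"Privileged": true, "Binds": ["/sys/fs/cgroup:/sys/fs/cgroup:ro"]}}')
spawner = json.loads('{"HostConfig": {"Privileged": true, "Binds": ["jupyterhub-user-janedoe:/home/jovyan:rw"]}}')

for path, left, right in diff_inspect(cli, spawner):
    print(path, left, right)
# → HostConfig.Binds ['/sys/fs/cgroup:/sys/fs/cgroup:ro'] ['jupyterhub-user-janedoe:/home/jovyan:rw']
```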

UPDATE2:

Okay, the following appears to be a material difference in the docker inspect jupyter-janedoe (JSON) output between the two cases, which may be contributing to this issue:

docker inspect jupyter-janedoe (launched via CLI):

"HostConfig": {
    "Binds": [
          "/sys/fs/cgroup:/sys/fs/cgroup:ro"
 ],

docker inspect jupyter-janedoe (launched via DockerSpawner):

"HostConfig": {
    "Binds": [
          "jupyterhub-user-janedoe:/home/jovyan:rw"
 ],

The DockerSpawner entry should be a union of both entries (as shown below), but it’s missing the second item:

"HostConfig": {
    "Binds": [
          "jupyterhub-user-janedoe:/home/jovyan:rw",
          "/sys/fs/cgroup:/sys/fs/cgroup:ro"
 ],

This means the code-snippet in my UPDATE1 is incorrect.
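In other words, the extra /sys/fs/cgroup bind needs to be appended to the Binds that DockerSpawner already generates from its volumes config, not replace them. The target data shape, sketched in plain Python (this is just an illustration of the desired end state, not DockerSpawner's actual internals):

```python
# The Binds entry DockerSpawner derives from c.DockerSpawner.volumes
# (the named home-directory volume):
spawner_binds = ["jupyterhub-user-janedoe:/home/jovyan:rw"]

# The extra bind we want, in docker-py's {host: {"bind": ..., "mode": ...}} shape:
extra = {"/sys/fs/cgroup": {"bind": "/sys/fs/cgroup", "mode": "ro"}}

# Flatten the extra binds to "host:container:mode" strings and append them,
# producing the union shown above rather than replacing the existing entry.
merged = spawner_binds + [
    f"{host}:{opts['bind']}:{opts['mode']}" for host, opts in extra.items()
]
print(merged)
# → ['jupyterhub-user-janedoe:/home/jovyan:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']
```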

So far I’ve verified that going through the c.DockerSpawner.volumes.update attribute is incorrect, because it led to the following at runtime, which isn’t what’s needed (and is perhaps even ambiguous as to what it means, LoL :blush:):

# We want a Volumes entry, not a Devices entry.
"Devices": [{
    "PathOnHost": "/sys/fs/cgroup",
    "PathInContainer": "/sys/fs/cgroup",
    "CgroupPermissions": "ro"
}]

So, reverse-engineering a little from the working CLI case, we want the following /sys/fs/cgroup entries; yet only the "Volumes": (middle) entry appears in the DockerSpawner failure case, and the outer two are missing:

[ ... snip ... ]

"Mounts": [{
   "Type": "bind",
   "Source": "/sys/fs/cgroup",
   "Destination": "/sys/fs/cgroup",
   "Mode": "ro",
   "RW": false,
   "Propagation": "rprivate" }]

[ ... snip ... ]

"Volumes": {
   "/sys/fs/cgroup": {}
}

[ ... snip ... ]

"HostConfig": {
   "Binds": [
     "/sys/fs/cgroup:/sys/fs/cgroup:ro"
],

To achieve this, I’m trying to figure out which attribute(s) to modify, and with what syntax.

In summary, I need a Mounts: entry for /sys/fs/cgroup, as well as to append a /sys/fs/cgroup entry to the HostConfig Binds list.

UPDATE3:

Based on this Docker low-level Python API and this DockerSpawner API, I created the following snippets, which I’ll try next week (and may have to tweak). If anyone knows how to do this, comments or suggestions (after reviewing the above) are always welcome.

c.DockerSpawner.extra_host_config = dict()

     [ ... snip ... ]

c.DockerSpawner.extra_host_config.update(
   {'/sys/fs/cgroup' :
       {'bind' : '/sys/fs/cgroup',
        'mode' : 'ro' }}
)

--OR--

c.DockerSpawner.extra_host_config.update(
   binds = {'/sys/fs/cgroup' :
              {'bind' : '/sys/fs/cgroup',
               'mode' : 'ro' }}
)

DockerSpawner has a volumes property that should take a volume name or host path:

What happens if you use:

c.DockerSpawner.volumes = {"/sys/fs/cgroup" : {"bind":"/sys/fs/cgroup", "mode": "ro"}}

?

Hi @manics

Sadly, I subsequently tried that, but it didn’t produce the desired result. You can see that same code snippet in my UPDATE1 comment above, and the result it produced in the DockerSpawner Pastebin URL that I provided in that same UPDATE1. Just search for /sys/fs/cgroup in that paste.

If you try it and get a different result, please let me know your context and what you implemented.

Meanwhile, I’ll eagerly look at the docs you provided. Thank you.

In your earlier post you were using volumes.update instead of setting it directly. This shouldn’t matter, but it’s worth trying. In addition, you’ve snipped some of the config, so it’s not possible to see whether there’s something that might conflict.

If you still can’t get it working could you please show the full configuration? If it’s in a git repo that allows full reproducibility you might encourage more people to take a look.

@manics Thanks. I actually posted these items in a different post. See THIS LINK for the files.

Don’t worry that that post focuses on getting systemd(1) working, because the three files (./jupyterhub_config.py, ./docker-compose.yml and ./.env) are the same ones I would post here. Of course I’ve updated jupyterhub_config.py since that post (updates corresponding to what you’re seeing in this thread), but now you have the entire files, top to bottom.

P.S. I tried posting this on GitHub (in JupyterHub and in DockerSpawner), but they keep closing them and directing me here.

You were redirected here from the GitHub issues because JupyterHub issues are reserved for bug reports and concrete feature requests, whereas this is a support/configuration problem.

When I suggested posting your files on git I meant a git repository (e.g. a GitHub gist) that someone can clone to fetch your files, as this lowers the barrier for anyone who wants to reproduce your problem. For a developer, git clone is easier than copying and pasting from a forum post.

@manics

In the end, if anyone determines which DockerSpawner attribute and data structure achieve the equivalent of the following CLI (which results in a bind Mount, as depicted above in UPDATE2), please let me know.

docker run -v /sys/fs/cgroup:/sys/fs/cgroup:ro [ ... ]

The base docker-stacks image works for tinkering on a laptop. No special image is required.

P.S. Since the volumes attribute suggestion didn’t work (with and without .update()), another possibility is a code or documentation bug.

I’ll just chip away at this. Thanks for the help.

SOLUTION (2nd snippet of UPDATE3):

c.DockerSpawner.extra_host_config.update(
   binds = {'/sys/fs/cgroup' :
              {'bind' : '/sys/fs/cgroup',
               'mode' : 'ro' }})

I had already solved the privileged-container dilemma; integrating that solution with the one above yields the (final) complete solution:

c.DockerSpawner.extra_host_config.update(
   {"privileged" : True},
   binds = {'/sys/fs/cgroup' :
              {'bind' : '/sys/fs/cgroup',
               'mode' : 'ro' }})
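For anyone puzzled by the syntax of that final snippet: dict.update() accepts a positional mapping and keyword arguments in the same call, so the privileged key and the binds key both land in extra_host_config. A quick stdlib-only check of that behavior:

```python
# Demonstrate that a single dict.update() call with a positional mapping
# plus a keyword argument merges both into the target dict -- the mechanism
# the final jupyterhub_config.py snippet relies on.
extra_host_config = {}
extra_host_config.update(
    {"privileged": True},
    binds={"/sys/fs/cgroup": {"bind": "/sys/fs/cgroup", "mode": "ro"}},
)
print(extra_host_config)
# → {'privileged': True, 'binds': {'/sys/fs/cgroup': {'bind': '/sys/fs/cgroup', 'mode': 'ro'}}}
```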