Hello Friends:
This question originally appeared on GitHub as: podman(1) not working inside JupyterLab containers that are spawned by JupyterHub (i.e. via DockerSpawner) ….
I have a JupyterLab (LAB) container image that accepts podman(1) commands fine when I spawn the container image manually via the CLI, but that experiences podman(1) container-related errors when that same container image is spawned via JupyterHub's DockerSpawner.
Let me show you the issue step-by-step.
Here are JupyterHub and Postgres running (no issues here; they work 100% perfectly):
root@GUEST# docker ps -a --no-trunc
ID IMAGE COMMAND PORTS NAMES
61cfb acme/jupyterhub:1.0 "/opt/jupyterhub.d/usr/bin/jupyterhub.sh" 0.0.0.0:443->443/tcp jupyterhub
d4793 postgres:latest "docker-entrypoint.sh postgres" 0.0.0.0:15432->5432/tcp jupyterhub-db
WORKING CASE
Spawn the JupyterLab container image manually via the CLI and issue podman(1) commands. Note that these manual steps do not use the above JupyterHub services.
root@GUEST# docker run \
-it --name jupyter-janedoe \
--privileged \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-p 18888:8888 acme/jupyterlab-base:1.0 \
/bin/bash
jovyan@CONTAINER$ sudo dnf -y install podman
# This is RedHat's podman(1) utility, which I actually bake into this image when I
# "docker build [...]" it. I'm just trying to be explicit here. =:)
jovyan@CONTAINER$ podman pull --events-backend=file docker.io/library/mariadb:latest
jovyan@CONTAINER$ podman image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/library/mariadb latest 3a348a04a815 3 weeks ago 413 MB
jovyan@CONTAINER$ mkdir -p ${HOME}/data.d/mariadb.d/var/lib/mysql
jovyan@CONTAINER$ podman run --events-backend=file \
--name mariadb01 \
-p 3306:3306 \
-v ${HOME}/data.d/mariadb.d/var/lib/mysql:/var/lib/mysql:Z \
-e MYSQL_ROOT_PASSWORD=**** \
-e MYSQL_DATABASE=db01 \
-e MYSQL_USER=janedoe \
-e MYSQL_PASSWORD=**** \
-d docker.io/library/mariadb
jovyan@CONTAINER$ podman ps --no-trunc
ID IMAGE CMD PORTS NAMES
1ac150 docker.io/library/mariadb mysqld 0.0.0.0:3306->3306/tcp mariadb01
jovyan@CONTAINER$ podman run \
--events-backend=file \
-it --rm mariadb \
mysql -e 'show databases' -h 172.17.0.2 -u root -p
Enter password:
+--------------------+
| Database |
+--------------------+
| db01 |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
jovyan@CONTAINER$
Here is some In-CONTAINER O/S information, which we’ll compare with the FAILURE CASE below:
jovyan@CONTAINER$ id
uid=1000(jovyan) gid=100(users) groups=100(users)
jovyan@CONTAINER$ ps -ef
UID PID PPID C STIME TTY TIME CMD
jovyan 1 0 0 00:13 pts/0 00:00:00 tini -g -- /bin/bash
jovyan 6 1 0 00:13 pts/0 00:00:00 /bin/bash
jovyan 40 1 0 00:14 ? 00:00:00 podman
jovyan 307 1 0 00:17 ? 00:00:00 /usr/bin/fuse-overlayfs -o lowerdir=/home/jovyan/.local/sh
jovyan 309 1 0 00:17 pts/0 00:00:00 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --e
jovyan 311 1 0 00:17 pts/0 00:00:00 containers-rootlessport
jovyan 324 311 0 00:17 pts/0 00:00:00 containers-rootlessport-child
jovyan 338 1 0 00:17 ? 00:00:00 /usr/bin/conmon --api-version 1 -c 1ac1502640165637e2b5969a7
100998 349 338 0 00:17 ? 00:00:00 mysqld
jovyan 1000 6 0 00:33 pts/0 00:00:00 ps -ef
jovyan@CONTAINER$ df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.8T 187G 1.6T 11% /
tmpfs 64M 0 64M 0% /dev
shm 64M 84K 64M 1% /dev/shm
/dev/sdb1 1.8T 187G 1.6T 11% /etc/hosts
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
jovyan@CONTAINER$ mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/MKCAZS7NCCQUDMYCDUPVVG34TH:/var/lib/d
ocker/overlay2/l/BZPXDZSY7OMH6LSH43ZFMO3SYT:/var/lib/docker/overlay2/l/I65E26L6EXPUXAJ4JVFDEKGIRU:/var/lib/docke
r/overlay2/l/GHJSNVHCXVUL4AG6J4SGTR4MOL:/var/lib/docker/overlay2/l/V5L7C7DUUSCOEJ4R6YYDRUOKEP:/var/lib/docker/ov
erlay2/l/A47I4M26GPBDPYVKCZ22QUNJUE:/var/lib/docker/overlay2/l/UBZECXYTT5SKO5FAUGJYXPDZ76:/var/lib/docker/overla
y2/l/QISGNIRTGUVC2ST5OO7SBTLPEW:/var/lib/docker/overlay2/l/R4KP7BPSJFQF3D2L6YVIX7MWJW:/var/lib/docker/overlay2/l
/ZFVOGFK4FXDMGZLDTVCZP5CXCO:/var/lib/docker/overlay2/l/IQZVKDF2N73EJSRNJ5UFQZLABK:/var/lib/docker/overlay2/l/FMU
QWHAODMEGORO3UKJHQWDVBT,upperdir=/var/lib/docker/overlay2/c177c74b2ecded4eecb6c88f54983fae4affef9f0197aa0e83fa6a
9049731833/diff,workdir=/var/lib/docker/overlay2/c177c74b2ecded4eecb6c88f54983fae4affef9f0197aa0e83fa6a904973183
3/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
/dev/sdb1 on /etc/resolv.conf type ext4 (rw,relatime)
/dev/sdb1 on /etc/hostname type ext4 (rw,relatime)
/dev/sdb1 on /etc/hosts type ext4 (rw,relatime)
tmpfs on /sys/fs/cgroup type tmpfs (ro,size=4096k,nr_inodes=1024,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
devpts on /dev/console type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
jovyan@CONTAINER$ exit # End the session, which removes the container.
root@GUEST#
FAILURE CASE
Now, if I perform the exact same sequence, except that I'm logged into that same JupyterLab container spawned via the JupyterHub UI instead (that is, spawned via DockerSpawner), I get the following error when I issue the podman run command to launch mariadb:
Error: OCI runtime error: container_linux.go:370: starting container process caused: process_linux.go:326: applying cgroup configuration for process caused: no cgroup mount found in mountinfo
Here is some O/S information from inside this DockerSpawner-spawned container (which should be compared with the WORKING CASE):
root@GUEST# docker exec -it jupyter-janedoe /bin/bash # Already spawned. We're just connecting to it.
jovyan@CONTAINER$ id
uid=1000(jovyan) gid=100(users) groups=100(users)
jovyan@CONTAINER$ ps -ef
UID PID PPID C STIME TTY TIME CMD
jovyan 1 0 0 00:41 ? 00:00:00 tini -g -- start-singleuser.sh --ip=0.0.0.0 --port=8888 --no
jovyan 6 1 0 00:41 ? 00:00:01 /opt/conda/bin/python /opt/conda/bin/jupyterhub-singleuser -
jovyan 54 0 0 00:42 pts/0 00:00:00 /bin/bash
jovyan 111 1 0 00:45 ? 00:00:00 podman
jovyan 413 54 0 00:54 pts/0 00:00:00 ps -ef
jovyan@CONTAINER$ df -h
Filesystem Size Used Avail Use% Mounted on
overlay 1.8T 187G 1.6T 11% /
tmpfs 64M 0 64M 0% /dev
shm 64M 84K 64M 1% /dev/shm
tmpfs 32G 0 32G 0% /run
tmpfs 32G 0 32G 0% /tmp
tmpfs 32G 0 32G 0% /run/lock
/dev/sdb1 1.8T 187G 1.6T 11% /etc/hosts
jovyan@CONTAINER$ mount
overlay on / type overlay (rw,relatime,lowerdir=/var/lib/docker/overlay2/l/P6F43PPRJY25RJKWD6JQAU6CMG:/var/lib/d
ocker/overlay2/l/BZPXDZSY7OMH6LSH43ZFMO3SYT:/var/lib/docker/overlay2/l/I65E26L6EXPUXAJ4JVFDEKGIRU:/var/lib/docke
r/overlay2/l/GHJSNVHCXVUL4AG6J4SGTR4MOL:/var/lib/docker/overlay2/l/V5L7C7DUUSCOEJ4R6YYDRUOKEP:/var/lib/docker/ov
erlay2/l/A47I4M26GPBDPYVKCZ22QUNJUE:/var/lib/docker/overlay2/l/UBZECXYTT5SKO5FAUGJYXPDZ76:/var/lib/docker/overla
y2/l/QISGNIRTGUVC2ST5OO7SBTLPEW:/var/lib/docker/overlay2/l/R4KP7BPSJFQF3D2L6YVIX7MWJW:/var/lib/docker/overlay2/l
/ZFVOGFK4FXDMGZLDTVCZP5CXCO:/var/lib/docker/overlay2/l/IQZVKDF2N73EJSRNJ5UFQZLABK:/var/lib/docker/overlay2/l/FMU
QWHAODMEGORO3UKJHQWDVBT,upperdir=/var/lib/docker/overlay2/bdd2a5ddab65e7b5e2b45e1854c6a8008a35d89f39aa5b57bdbb52
d2259634bb/diff,workdir=/var/lib/docker/overlay2/bdd2a5ddab65e7b5e2b45e1854c6a8008a35d89f39aa5b57bdbb52d2259634b
b/work)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev type tmpfs (rw,nosuid,size=65536k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=65536k)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime)
/dev/sdb1 on /etc/resolv.conf type ext4 (rw,relatime)
/dev/sdb1 on /etc/hostname type ext4 (rw,relatime)
/dev/sdb1 on /etc/hosts type ext4 (rw,relatime)
/dev/sdb1 on /home/jovyan type ext4 (rw,relatime)
/dev/sdb1 on /sys/fs/cgroup type ext4 (rw,relatime)
jovyan@CONTAINER$ exit
root@GUEST#
One difference you'll immediately notice is that many cgroup / cgroup2 mount(5) entries are missing in the DockerSpawner failure case.
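In case it is useful for comparison, here is a quick check one could run inside both containers to see what the container runtime actually finds (plain shell; findmnt is part of util-linux and may need installing in the image):
jovyan@CONTAINER$ grep cgroup /proc/self/mountinfo                 # empty output would match the "no cgroup mount found in mountinfo" error
jovyan@CONTAINER$ findmnt -o TARGET,SOURCE,FSTYPE /sys/fs/cgroup   # working case shows tmpfs/cgroup2; failure case shows ext4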
Finally, I will note that ./jupyterhub_config.py is configured to spawn JupyterLab containers as privileged. Meaning, the ./jupyterhub_config.py snippet below is equivalent to telling DockerSpawner to run the docker container with these options, which I used in the working case:
docker run [ ... ] --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro [ ... ]
root@GUEST# vi .../jupyterhub_config.py
[ ... snip ... ]
c.DockerSpawner.extra_host_config.update({
"privileged" : True,
"devices" : ["/sys/fs/cgroup:/sys/fs/cgroup:ro",],
"tmpfs" : {"/tmp":"", "/run":"", "/run/lock":""}, })
[ ... snip ... ]
And via docker inspect jupyter-janedoe, one can see that these settings are in effect (below).
Extracted from the output of docker inspect jupyter-janedoe:
[ ... snip ... ]
"Devices": [
{
"PathOnHost": "/sys/fs/cgroup",
"PathInContainer": "/sys/fs/cgroup",
"CgroupPermissions": "ro"
}
]
[ ... snip ... ]
"Tmpfs": {
"/run": "",
"/run/lock": "",
"/tmp": ""
}
[ ... snip ... ]
"Volumes": {
"/home/jovyan": {},
"/sys/fs/cgroup": {}
}
[ ... snip ... ]
"Privileged": true,
[ ... snip ... ]
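For completeness, here is what I believe the explicit bind-mount form (the docker run -v ... form) would look like in ./jupyterhub_config.py. This is an untested assumption on my part, shown only to contrast with the "devices" entry above:
root@GUEST# vi .../jupyterhub_config.py
[ ... snip ... ]
# Untested assumption: a read-only bind mount of the host cgroup tree would be
# expressed via the DockerSpawner volumes trait rather than the "devices" list.
c.DockerSpawner.volumes = { "/sys/fs/cgroup" : {"bind": "/sys/fs/cgroup", "mode": "ro"} }
[ ... snip ... ]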
In summary, I need to understand what, equivalently, the difference is between when I spawn the container manually via the CLI and when DockerSpawner spawns it. In other words, if we reverse-engineered DockerSpawner to come up with the equivalent docker run [ ... ] command that it issues (yes, I know it uses the programmatic API), then we would know why the two O/S environments are not coming up identically as far as privilege (container runtime), cgroup, and other O/S environment contexts are concerned. That, in turn, would allow me to tweak ./jupyterhub_config.py so that I can get podman(1) to work.
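In case it helps, here is how I imagine the two environments could be compared directly, by dumping and diffing the HostConfig of a manually run container versus the DockerSpawner-spawned one (the container names below are just placeholders for whichever two containers are being compared):
root@GUEST# docker inspect --format '{{json .HostConfig}}' jupyter-janedoe-manual | python3 -m json.tool > manual.json
root@GUEST# docker inspect --format '{{json .HostConfig}}' jupyter-janedoe | python3 -m json.tool > spawned.json
root@GUEST# diff -u manual.json spawned.json   # every difference is a candidate explanation for the cgroup error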
Any help is appreciated from the DockerSpawner experts. Thank you in advance! =:)