Custom Docker image for JupyterHub on Kubernetes


I’ve been struggling for a few days with an issue in a custom-built Docker image for JupyterHub, and I can’t figure out how to fix it.

I’m deploying JupyterHub to Kubernetes (AKS) via the Helm chart. The chart is configured with persistent user storage backed by an Azure file share.
The home mount path is set to /home/jovyan.
The image is a custom Docker image built by me.

I’ve narrowed the issue down to what looks like a permissions problem for the user jovyan. I’ve checked the official images and added all the steps they use to fix the permissions of jovyan’s home directory.

However, when a user creates a new Python 3 notebook, they receive the following error message:
RuntimeError: Permissions assignment failed for secure file: '/home/jovyan/.local/share/jupyter/runtime/kernel-xxxxxxxx.json'. Got '0o677' instead of '0o0600'.

If I enter the container, cd to /home/jovyan/.local/share/jupyter/runtime/ and run ls -l, the output is:
-rwxrwxrwx 1 root root 0 Sep 24 10:23 kernel-xxxxxxxxxxxxxxxxx.json

So root owns this file, which matches the error message pasted above.
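For context, Jupyter refuses to use a kernel connection file unless its permission bits are exactly 0o600 (owner read/write only). A minimal Python sketch of that kind of check (this is an illustration, not Jupyter’s actual code):

```python
import os
import stat
import tempfile

def check_secure(path, expected=0o600):
    """Return (octal mode string, True/False) depending on whether the
    file's permission bits match the expected secure mode."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return oct(mode), mode == expected

# Simulate a kernel connection file on a mount with loose permissions
fd, path = tempfile.mkstemp(suffix=".json")
os.close(fd)

os.chmod(path, 0o777)        # world-writable, like the file on the broken mount
print(check_secure(path))    # ('0o777', False) -> Jupyter raises RuntimeError

os.chmod(path, 0o600)        # owner read/write only, what Jupyter expects
print(check_secure(path))    # ('0o600', True)

os.remove(path)
```

On a CIFS mount the chmod step may silently have no effect, which is why the mode has to be correct at mount time.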

To verify how it should look, I deployed the Helm chart with the official image jupyterhub/k8s-singleuser-sample instead of our custom one.
If I then enter the container, cd to the same path (/home/jovyan/.local/share/jupyter/runtime/) and run ls -l:

-rw------T 1 jovyan users 263 Sep 24 08:48 kernel-56b3bdfb-7557-4f0a-a541-79989cfe1e5b.json

This is how we want it to look: the user jovyan owns the file.

How do I set these permissions correctly in the Dockerfile?
What is the correct CMD to define in the Dockerfile for startup? Currently it is "jupyterhub-singleuser".

Thanks in advance.

Is your Dockerfile on GitHub or some other public repository? That’ll help people figure out your problem.

My Dockerfile is here:

  • Mybaseimage is based on python:3.6.11-buster, where I install a few drivers for different databases.

    FROM mybaseimage
    RUN pip3 install qgrid==1.1.1
    COPY privaterepos /tmp/
    RUN pip3 install /tmp/
    RUN pip3 install jupyterhub==0.9.6 'notebook>=5.0,<=6.0'
    RUN jupyter nbextension enable --py --sys-prefix qgrid
    RUN jupyter nbextension enable --py --sys-prefix widgetsnbextension
    RUN useradd -m jovyan
    ENV HOME=/home/jovyan
    USER jovyan
    CMD ["jupyterhub-singleuser"]

I’ve also tried adding a few other steps based on this:

That Dockerfile looked like:

    FROM mybaseimage
    RUN pip3 install qgrid==1.1.1
    COPY privaterepos /tmp/
    RUN pip3 install /tmp/
    RUN pip3 install jupyterhub==0.9.6 'notebook>=5.0,<=6.0'
    RUN jupyter nbextension enable --py --sys-prefix qgrid
    RUN jupyter nbextension enable --py --sys-prefix widgetsnbextension

    # Args for user name, user id and group id
    ARG NB_USER="jovyan"
    ARG NB_UID="1000"
    ARG NB_GID="100"

    # Fix DL4006
    SHELL ["/bin/bash", "-o", "pipefail", "-c"]

    # Change to root user
    USER root

    # Configure environment and set home path
    ENV HOME=/home/$NB_USER

    # Copy a script that we will use to correct permissions after running certain commands
    COPY fix-permissions /usr/local/bin/fix-permissions
    RUN chmod a+rx /usr/local/bin/fix-permissions

    # Create NB_USER with name jovyan, UID=1000, in the 'users' group,
    # and make sure these dirs are writable by the 'users' group
    RUN useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
        chown $NB_USER:$NB_GID $HOME && \
        chmod g+w /etc/passwd && \
        fix-permissions $HOME

    # Switch to jovyan
    USER $NB_UID
    RUN mkdir /home/$NB_USER/work && \
        fix-permissions /home/$NB_USER

    # Fix permissions on /usr/local/etc/jupyter and on /home/jovyan as root
    USER root
    RUN fix-permissions /usr/local/etc/jupyter/ && \
        fix-permissions /home/$NB_USER

    # Copy local files
    COPY /usr/local/bin/

    # Set user to jovyan and workdir to /home/jovyan to avoid accidental container runs as root
    USER $NB_UID
    WORKDIR $HOME
    RUN fix-permissions /home/$NB_USER

    # Configure container startup
    CMD ["jupyterhub-singleuser"]

The fix-permissions script used above can be found in the repository I linked above.

Neither of these worked. Both images cause the kernel error posted above, because the user jovyan does not own the /home/jovyan path.

I think one issue could be that you sometimes run fix-permissions without being the root user in the Dockerfile, and then the script lacks the permissions needed to fix the permissions.

I also wonder about .jupyter being a hidden folder. Is it picked up by the fix-permissions script? I think it is, though. Hmmm…

I think what I would do is build the Docker image, and once it passes a given layer (one is created per Dockerfile instruction, like RUN) and you see it referenced by a certain ID, run:

docker run -it --rm --entrypoint bash <layer-id>

Then run the remaining RUN steps in sequence, inspecting the permissions between each one, especially for the home folder and the /home/jovyan/.local folder.
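If you’d rather script that inspection than eyeball ls -l at every layer, a small helper along these lines could be run inside each container (my own sketch, not from the images):

```python
import grp
import os
import pwd
import stat

def describe(path):
    """Return 'owner:group octal-mode path' for a file or directory,
    roughly the fields you would read off ls -l."""
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    group = grp.getgrgid(st.st_gid).gr_name
    return f"{owner}:{group} {oct(stat.S_IMODE(st.st_mode))} {path}"

# Example: inspect the home directory (and /home itself)
for p in ("/home", os.path.expanduser("~")):
    print(describe(p))
```

Running it on /home/jovyan and /home/jovyan/.local at each step would show exactly where ownership flips to root.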


Thanks for the reply.
I entered the different layers, and the permissions on /home/jovyan all looked good.
I also entered the complete image with docker run -it <image> /bin/bash.

In /home I executed ls -l, and the output was:

drwsrwsr-x 2 jovyan users 4096 Sep 25 11:18 jovyan

which is correct: jovyan owns this folder at this point.

However, when I tried to change directory to /home/jovyan/.local/, it returned: cd: .local: No such file or directory

It doesn’t seem like that directory is created during the build of this Docker image. I guess something happens when it deploys to Kubernetes and mounts the persistent storage I specified in the Helm chart.

Is there any particular setting I must be aware of when mounting a persistent Azure file share as storage?

I’ve deployed a new storage class, and in the Helm chart I’ve configured it like this:
homeMountPath: /home/jovyan
storageClass: azurefileshare
pvcNameTemplate: claim-{username}{servername}
volumeNameTemplate: volume-{username}{servername}
storageAccessModes: [ReadWriteOnce]

Oh, then yes. You need to do a chown after mounting NFS shares, sadly.

An init container that also mounts the volume and chowns it will be needed :confused:
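For reference, a rough sketch of what such an init container could look like in the zero-to-jupyterhub Helm config. The volume name "home" and the busybox image are assumptions on my part, so check them against your generated pod spec:

```yaml
# Hypothetical sketch: run as root before the notebook container starts
# and chown the mounted home volume to jovyan (uid 1000, gid 100).
singleuser:
  initContainers:
    - name: fix-home-permissions
      image: busybox:1.36
      command: ["sh", "-c", "chown -R 1000:100 /home/jovyan"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: home
          mountPath: /home/jovyan
```

Note this approach helps with NFS-style mounts; on CIFS mounts like Azure Files, ownership is fixed at mount time by the uid/gid mount options, so a chown there has no effect.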

We managed to fix the issue. The problem was how Azure mounts the file share into the pod. By default, the Azure storage class mounts the share with mountOptions that leave everything owned by root, as found on this link:

We changed the mountOptions to the following:
- dir_mode=0755
- file_mode=0600
- uid=1000
- gid=100
- mfsymlinks
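For anyone hitting the same thing, here is a sketch of what the full StorageClass might look like with those options. The provisioner and parameters are assumptions based on a typical AKS setup, not copied from our cluster:

```yaml
# Sketch only: custom StorageClass for Azure Files (details assumed).
# uid/gid make the CIFS mount appear owned by jovyan (1000), group users (100);
# dir_mode/file_mode give the 0600 files Jupyter's secure-write check expects.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefileshare
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
mountOptions:
  - dir_mode=0755
  - file_mode=0600
  - uid=1000
  - gid=100
  - mfsymlinks
```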

The homeMountPath /home/jovyan is now mounted as owned by user jovyan. The kernel now starts and everything works as expected!
