Conda env listed as a kernel in docker container, but not in a docker service

I have to install the finicky pycaret package and all its dependencies in a separate Python 3.8 environment.
I built the image successfully. This is the Dockerfile, based on Jupyter Docker Stacks’ contributed recipes.

FROM jupyter/datascience-notebook:lab-3.2.8

# name of the additional conda environment to create
ARG conda_env=pycaret

# you can add additional libraries you want mamba to install by listing them below the first line and ending with "&& \"
RUN mamba create --yes -p "${CONDA_DIR}/envs/${conda_env}" -c conda-forge \
    -c interpretml \
    python=3.8 ipython ipykernel pip 'pycaret>=2.3.0' \
    opentsne deepchecks great-expectations psycopg2 thriftpy2 \
    shap 'tune-sklearn>=0.2.1' 'ray-tune>=1.0.0' 'hyperopt' 'optuna>=2.2.0' \
    'scikit-optimize>=0.8.1' 'psutil' 'catboost>=0.23.2' 'xgboost>=1.1.0' \
    explainerdashboard 'interpret<=0.2.4' evidently autoviz fairlearn \
    fastapi uvicorn 'fugue>=0.6.5' wandb \
    boto3 azure-storage-blob google-cloud-storage && \
    mamba clean --all -f -y

# the following do not exist in anaconda repos
# m2cgen gradio

# install packages only available via pip
RUN "${CONDA_DIR}/envs/${conda_env}/bin/pip" install \
    'prefect>=2.0b' feast nannyml \
    --no-cache-dir

# create Python kernel and link it to jupyter
RUN "${CONDA_DIR}/envs/${conda_env}/bin/python" -m ipykernel install --user --name="${conda_env}" && \
    fix-permissions "${CONDA_DIR}" && \
    fix-permissions "/home/${NB_USER}"

# make this conda environment the default one in terminal sessions
RUN echo "conda activate ${conda_env}" >> "${HOME}/.bashrc"

# closing instructions
ENV JUPYTER_ENABLE_LAB=yes

# Dask Scheduler & Bokeh ports
EXPOSE 8787
EXPOSE 8786

ENTRYPOINT [ "start-notebook.sh", "--ServerApp.ip=0.0.0.0", "--ServerApp.token=''", "--ServerApp.allow_root=True" ]
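After building, a quick way to check which kernels the image actually registers (the image tag my-pycaret-lab is a placeholder):

```shell
docker build -t my-pycaret-lab .

# list the registered kernelspecs inside a throwaway container;
# the pycaret kernel should appear alongside the default python3 one
docker run --rm my-pycaret-lab jupyter kernelspec list
```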

If I spin up this image using docker run, I can see the pycaret kernel listed within JupyterLab.
However, if I spin up this image as a Docker Swarm service, the pycaret kernel is not listed, which defeats the point.

I cannot imagine why.

Docker Service (kernel not listed)

Docker Container standalone (pycaret kernel listed)

This sounds like an issue with how your images are being located by Docker, unrelated to JupyterLab. E.g. maybe you’ve got multiple versions of the image with the same tag on different nodes?

Everything I’m doing is local: I am building the image locally and testing it locally. And it is a tag I have not uploaded to Docker Hub nor spun up on a server.

The “problem” was the --user flag in the kernel installation step. When I spin up a Docker service I specify a non-root user, hence the kernel doesn’t show up in the Docker service.
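For context, the two launch modes look roughly like this (the image and service names are placeholders):

```shell
# standalone container: runs as the image's default user, kernel visible
docker run --rm -p 8888:8888 my-pycaret-lab

# Swarm service: started with an explicit non-root user, whose home
# directory does not contain the kernelspec installed with --user
docker service create --name lab --publish 8888:8888 --user 1000:100 my-pycaret-lab
```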

# create Python kernel and link it to jupyter
RUN "${CONDA_DIR}/envs/${conda_env}/bin/python" -m ipykernel install --user --name="${conda_env}"

as it installs the kernelspec only for the current user (the build-time user) and makes it inaccessible to other arbitrary users. So, removing the --user flag from that command makes the kernel available system-wide, solving the problem.
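The difference boils down to where ipykernel writes the kernelspec directory. A minimal sketch of the two locations, assuming Jupyter’s default Linux paths (the helper function is illustrative, not part of ipykernel’s API):

```python
import os
import sys

def kernelspec_dir(kernel_name, user=False, prefix=None):
    """Approximate where `ipykernel install` writes the kernelspec.

    user=True  -> per-user data dir, visible only to that user
    user=False -> system-wide under the given prefix, visible to everyone
    """
    if user:
        # equivalent of `--user` on Linux: ~/.local/share/jupyter
        base = os.path.join(os.path.expanduser("~"), ".local", "share", "jupyter")
    else:
        base = os.path.join(prefix or sys.prefix, "share", "jupyter")
    return os.path.join(base, "kernels", kernel_name)

# with --user: lands in the build-time user's home, e.g. ~/.local/share/jupyter/kernels/pycaret
print(kernelspec_dir("pycaret", user=True))

# without --user: a system-wide path readable by any user
print(kernelspec_dir("pycaret", user=False, prefix="/usr/local"))
# /usr/local/share/jupyter/kernels/pycaret
```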

Yes, I do feel stupid for not reading ipykernel install --help sooner.


However, in order to install the kernel system-wide, without the --user flag, the command must be run as root (e.g. via sudo), which I do not have access to during the docker build. So the solution is to spin up the docker service and run the following from a JupyterLab terminal:

conda_env=your_conda_env_name
"${CONDA_DIR}/envs/${conda_env}/bin/python" -m ipykernel install --name="${conda_env}"

It is manual, but it solves the problem.

Temporarily switch to the root user, install the kernel system-wide, then reset to the previous user (1000:100):

USER 0:0
RUN "${CONDA_DIR}/envs/${conda_env}/bin/python" -m ipykernel install --name="${conda_env}" && \
    fix-permissions "${CONDA_DIR}" && \
    fix-permissions "/home/${NB_USER}"
USER 1000:100