I’m exploring solutions to enable users of our JupyterHub deployment to create and manage multiple virtual environments independently. Our setup is based on Kubernetes v1.28.2, and each user accesses their workspace through Jupyter Lab. The goal is to empower users to create distinct virtual environments for different notebooks directly within their session. For instance, User A logs in, initiates their Jupyter Lab session, and then sets up virtual environment ‘X’ for Notebook X, ‘Y’ for Notebook Y, and so on.
This approach would allow users to tailor their environment based on the specific requirements of each notebook. We aim to utilize the conda env command for creating these environments, ensuring a seamless experience for users to manage their virtual environments without external assistance.
If anyone has implemented a similar solution or knows of alternative methods to achieve this level of user autonomy in environment management, I’d greatly appreciate your insights. Any guidance, including relevant configurations or tools, would be immensely helpful.
Users can do this now in a terminal in JupyterLab by running:
# create the env
# it must have a kernel package in it (`ipykernel` for IPython, or another kernel package if a different language)
conda create -n my-project-name ipykernel ...other packages...
# register the env as a kernel named 'my-project', available to JupyterLab
conda run -n my-project-name python3 -m ipykernel install --user --name my-project
This is fine in most cases. However, how do you get a custom ipywidget (say bqplot, GitHub: bqplot/bqplot, a plotting library for IPython/Jupyter notebooks) installed in the my-project-name environment to work correctly in JupyterHub?
The underlying JupyterLab application (the JavaScript side of the widget) will be installed in /path/to/my-project-name/share/jupyter/labextensions and will not be visible to the running JupyterLab, because the server does not use the same environment as the kernel. As a result, the widget will not be displayed. Is there a way to run a named server with a custom environment to remedy this? Thanks
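One way to see the mismatch concretely is to compare the directories the running server actually searches with the prefix the widget was installed into. A minimal check (a sketch; run it from the environment that serves JupyterLab, with /path/to/my-project-name standing in for the kernel environment's prefix):

```python
# Sketch: compare the server's labextension search path with the kernel env prefix.
from jupyter_core.paths import jupyter_path

server_dirs = jupyter_path("labextensions")   # where the running JupyterLab looks
kernel_prefix = "/path/to/my-project-name"    # placeholder kernel env prefix

print("Server searches:")
for d in server_dirs:
    print(" ", d)

print("Widget assets live in:", f"{kernel_prefix}/share/jupyter/labextensions")
print("Visible to the server:", any(d.startswith(kernel_prefix) for d in server_dirs))
```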
Yes, this would be possible in your Spawner.cmd by launching a script that does environment activation prior to launching, e.g.
#!/bin/bash -l
eval "$(command conda init shell.bash)"
# can select environment based on e.g. $JUPYTERHUB_SERVER_NAME or your own environment variables in Spawner.environment
conda activate some-environment
exec jupyterhub-singleuser "$@"
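For completeness, wiring this up in jupyterhub_config.py might look roughly like the following (a sketch: the script path /usr/local/bin/start-in-conda-env.sh and the CONDA_ENV_NAME variable are assumptions, not part of the answer above):

```python
# jupyterhub_config.py (sketch; script path and variable name are assumptions)

# Launch the wrapper script above instead of jupyterhub-singleuser directly;
# the wrapper activates the conda env and then exec's jupyterhub-singleuser.
c.Spawner.cmd = ["/usr/local/bin/start-in-conda-env.sh"]

# Optionally pass your own selector variable so the wrapper can decide which
# env to activate (e.g. `conda activate "$CONDA_ENV_NAME"` instead of a
# hard-coded name).
c.Spawner.environment = {"CONDA_ENV_NAME": "my-project-name"}
```

For named servers, the wrapper can instead branch on $JUPYTERHUB_SERVER_NAME, which JupyterHub already sets.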
The underlying JupyterLab application (the JavaScript side of the widget) will be installed in /path/to/my-project-name/share/jupyter/labextensions and
We should really allow selecting these paths easily in JupyterLab somehow. Of course this should be opt-in due to security concerns, and it would require refreshing JupyterLab after making changes.
The list of prefixes is populated from jupyter_core.paths.jupyter_path() and can be extended by setting the JUPYTER_PATH environment variable, e.g. export JUPYTER_PATH=/some_env/share/jupyter to add an env.
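For example (a sketch; /some_env is the placeholder prefix from above), the extra prefix shows up in the search path as soon as the variable is set, though for a JupyterHub-spawned server it would need to be set before launch, e.g. in Spawner.environment or in a wrapper script like the one earlier in this thread:

```python
# Sketch: extend Jupyter's data search path with an extra env's share/jupyter.
# "/some_env" is the placeholder prefix used above.
import os
from jupyter_core.paths import jupyter_path

os.environ["JUPYTER_PATH"] = "/some_env/share/jupyter"

# JUPYTER_PATH entries are searched first, so the server would now also look in
# /some_env/share/jupyter/labextensions, /some_env/share/jupyter/kernels, etc.
print(jupyter_path("labextensions"))
```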
But it’s quite complex, because any extension that has both server and frontend components can only ever truly work if it is installed in the same env as the server itself.
At least in the particular case of ipywidgets, there’s one detail that could simplify things: an ipywidget is made up of a JavaScript extension and a Python package.
The JavaScript code runs exclusively in the frontend and the Python code runs exclusively in the kernel (which may use a different conda environment from that of the jupyter-server).
I wonder if a mechanism could be devised to allow the server to retrieve the JavaScript component from the kernel via a ZMQ socket (rather than from /jupyter_server_env/share/jupyter/) and make it available to the frontend.
This would also allow a single jupyter-server to serve many different kernels containing ipywidgets.