Hello,
I am a long-time user of Jupyter tools, and I have always enjoyed working with Hub and Lab.
I recently had a problem to solve, and I ended up creating a repository for an experiment that, so far, works for me.
The principle is as follows:
- each notebook uses its own virtual environment, so there is no way for it to override the kernel's dependencies
- “uv” is used (automatically) to manage the installation cache and avoid duplication: 100 notebooks installing the same pinned version of torch actually share a single copy of torch for all these notebooks (see the short illustration below)
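To make the principle concrete, here is roughly what it looks like from inside the notebooks. This is only an illustration of the intended behavior (the post above says “pip” is routed to “uv pip”); the torch version is just an example:

```python
# Notebook A: the kernel has created a dedicated environment for this notebook,
# and "pip" is routed to "uv pip", so the install goes through uv's shared cache.
%pip install torch==2.3.0
```

```python
# Notebook B: a different notebook, hence a different environment.
# Same pinned version, so nothing is re-downloaded and no second copy is stored:
# uv links the cached files into this environment as well.
%pip install torch==2.3.0
```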
Why did I create this?
A kernel, in the Jupyter sense, is backed by a kind of virtual environment, like one created with “venv”: it has its own version of Python and its own place to install packages. Except that… I have dozens of notebooks and several potential installations of “torch,” “numpy,” and “pandas,” with specific versions for each project. This forced me to create a plethora of kernels, with duplications… The disk usage was enormous.
uv avoids this duplication by creating hard links by default: the package files live in one place (uv's cache) and are linked into each environment's package directory.
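You can check the hard-link sharing yourself. A minimal sketch, assuming two notebook environments that both installed the same numpy build through uv; the paths are hypothetical, adjust them to your setup:

```python
import os
from pathlib import Path

# Hypothetical paths: two notebook environments that both installed the same
# numpy build through uv (adjust to your real environment locations).
a = Path("~/.venvs/notebook-a/lib/python3.12/site-packages/numpy/version.py").expanduser()
b = Path("~/.venvs/notebook-b/lib/python3.12/site-packages/numpy/version.py").expanduser()

# With uv's default hard-link mode, both paths point at the same inode,
# so the file's bytes exist only once on disk.
print(os.stat(a).st_ino == os.stat(b).st_ino)   # True when hard-linked
print(os.stat(a).st_nlink)                      # > 1: the file is shared with the cache
```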
So my project is simple: it's a Python kernel, except that it forces the creation of one environment per notebook and replaces “pip” with “uv pip” (to keep usage transparent).
Each notebook therefore has its own environment, and packages are not re-downloaded or copied: uv takes care of the links, the versions, and cleanup when needed.
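For the curious, the mechanism boils down to something like the sketch below. This is not the project's actual code, just an illustration of what the kernel automates; the environment naming scheme and helper names are my own assumptions:

```python
import subprocess
from pathlib import Path

def ensure_notebook_env(notebook_path: str) -> Path:
    """Create (once) a dedicated uv-managed environment for this notebook."""
    nb = Path(notebook_path)
    env_dir = nb.parent / f".venv-{nb.stem}"        # hypothetical naming scheme
    if not env_dir.exists():
        subprocess.run(["uv", "venv", str(env_dir)], check=True)
    return env_dir

def uv_pip_install(env_dir: Path, *packages: str) -> None:
    """Install into that environment via 'uv pip', sharing uv's global cache."""
    python = env_dir / "bin" / "python"             # Linux/macOS layout
    subprocess.run(
        ["uv", "pip", "install", "--python", str(python), *packages],
        check=True,
    )

env = ensure_notebook_env("analysis.ipynb")
uv_pip_install(env, "numpy==2.1.0")
```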
Note, however, that in this example, inside a container, I force the use of symbolic links (because hard links cannot work across separate volumes).
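If I remember uv's options correctly, this is controlled through its link mode (the UV_LINK_MODE environment variable, or the equivalent --link-mode flag); something along these lines, with a hypothetical interpreter path:

```python
import os
import subprocess

# Inside the container, the uv cache volume and the environments volume may be
# different filesystems, so hard links are impossible; fall back to symlinks.
os.environ["UV_LINK_MODE"] = "symlink"   # same effect as passing --link-mode=symlink

subprocess.run(
    ["uv", "pip", "install",
     "--python", "/envs/notebook-a/bin/python",   # hypothetical interpreter path
     "pandas==2.2.2"],
    check=True,
)
```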
Source code, container file, and explanation are here: Patrice FERLET / Jupyter UV venv · GitLab (edited: it is now a package on PyPI)
I’m using “hub” for now (because I need this with several users), but it may also work with “lab” (perhaps with a few fixes to do in the kernel)
Thanks