What is the relationship between kernels and virtual environments?

I’m confused (noob) about the relationship between IPython kernels and Python virtual environments (at least as they are used in JupyterLab).

For example, if I activate a virtual environment (in my case typically with workon my_venv) and then

python -m ipykernel install --user --name my_kernel --display-name "Python (my_kernel)"

what is the connection between the kernel my_kernel and the virtual environment my_venv?

In my typical workflow, once both the virtual environment and the kernel are set up I

$ workon my_venv
$ [my_venv] jupyter lab

and end up in a JupyterLab session where

  1. the terminal acts as if my_venv has been activated;
  2. all shell commands issued from any notebook, regardless of its associated kernel, act as if my_venv has been activated; and
  3. a notebook can import only the packages installed in the environment of its associated kernel (which need not be my_venv).
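One quick way to check point 3 is to ask a Python process which interpreter and environment it is actually running from; inside a notebook, the kernel's answer is fixed by its kernelspec, not by whichever venv was active when JupyterLab was launched. A sketch (run the same check in a notebook cell and in the terminal to see them diverge):

```shell
# Shell commands in a notebook (the `!` prefix) use the environment that
# launched JupyterLab, while `import` uses the kernel's own interpreter.
# This one-liner prints the interpreter path and environment prefix that a
# Python process actually sees:
python3 -c 'import sys; print(sys.executable); print(sys.prefix)'
```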

Also, in addition to any kernels I have explicitly defined, I have a (local to my_venv?) kernel called “Python 3 (ipykernel)”, which is associated with my_venv.

So my questions are:

  1. is the above a reasonable summary of the relationship between IPython kernels and Python virtual environments? Am I missing something important?
  2. Is it correct that I never really need to create a kernel manually for a virtual environment, since one is created automatically and is identical to the one created manually in the step above (except for name and visibility)?
  3. Is there a way to name the automatically created kernel, or otherwise customize that step (when, and with what tools, e.g., to change its name)?

@orome, great question!

Jupyter (Lab/Server/Notebook) “dynamically” creates a kernel(spec) for your current Python environment; that kernelspec exists only in that virtual environment. This ensures that you always have a Python kernel available, and that it matches the environment where your current JupyterLab is running.

When you manually create a kernelspec, using python -m ipykernel install --user ..., you’re creating a static kernel(spec) on disk that can be discovered from other virtual environments. The advantage of this is that you can install JupyterLab in one virtual environment while running kernels from other virtual environments.
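For context on what that install step actually writes: a kernelspec is just a small directory containing a kernel.json whose argv points at a specific interpreter. A minimal sketch (the interpreter path and directory below are made up; a real --user install typically lands under your per-user Jupyter data directory, e.g. ~/.local/share/jupyter/kernels/my_kernel/ on Linux):

```shell
# Write an illustrative kernel.json like the one `ipykernel install` produces.
# The "argv" records which Python the kernel launches; that is what keeps a
# static kernelspec tied to its venv even when Jupyter runs from another one.
mkdir -p /tmp/my_kernel_demo
cat > /tmp/my_kernel_demo/kernel.json <<'EOF'
{
  "argv": [
    "/home/me/.virtualenvs/my_venv/bin/python",
    "-m", "ipykernel_launcher",
    "-f", "{connection_file}"
  ],
  "display_name": "Python (my_kernel)",
  "language": "python"
}
EOF
# Sanity-check that the spec parses and show the interpreter it would launch:
python3 -c 'import json; print(json.load(open("/tmp/my_kernel_demo/kernel.json"))["argv"][0])'
```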

Many people use this feature to keep their kernel environments isolated. They might have different kernels for different tasks/workflows.


There’s a bit of a glitch here (not in the tech, but in the simple “kernel–venv” user model) with regard to extensions that are (now) automatically enabled by installing a package into an associated venv: those extensions are not available to the kernels, but only to JupyterLab instances launched from the activated env.

Hi @Zsailer. Is there a way to create the static kernel without installing ipykernel in that particular environment? My problem is that I don’t want anything installed in that env that won’t run in prod, which includes ipykernel and its dependencies. It feels like it should be possible (and I tried simply creating the spec manually, but Jupyter refuses to load it, probably because the connection file is wrong, and I don’t know how to solve that). Any hints would be greatly appreciated. Thanks!
