Creating a new kernel without installing ipykernel in venv

Hello! I would like to create a kernel without having to install ipykernel in my venv, because it slows down dependency resolution considerably. I tried a couple of hacks, but none worked, and IDEs do not offer this feature (neither VSCode nor the JLab IDE: they need to attach to a running jupyter server). Does anyone know a way? And how do you feel about this - does anyone think this should be a feature added to Jupyter? If so, I would like to volunteer to help.

without having to install ipykernel in my venv

Where do you imagine such a kernel would go, and what would it do?

Where do you imagine such a kernel would go[?]

Given my workflow, I would place it under $JUPYTER_DATA_DIR/kernels - I reckon that is its appropriate place. Given the “global” nature of my question, I would not put it under .venv: it might work there as well, but it would be more cumbersome for Jupyter to discover, so it would not be my ideal choice. If I chose said kernel and ran it but the interpreter was no longer there, I would simply raise an error (“perhaps you deleted the venv?”).
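
As a minimal sketch of that error case (the check_kernel helper is hypothetical, not an existing Jupyter API):

```python
import json
import pathlib

def check_kernel(spec_dir: pathlib.Path) -> None:
    # Hypothetical helper: read the kernel.json under $JUPYTER_DATA_DIR/kernels/<name>
    # and verify that the interpreter it points at still exists.
    spec = json.loads((spec_dir / "kernel.json").read_text())
    interpreter = pathlib.Path(spec["argv"][0])  # argv[0] is the interpreter path
    if not interpreter.exists():
        raise FileNotFoundError(f"{interpreter} not found - perhaps you deleted the venv?")
```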

and what would it do?

You mean, the kernel? It would behave like a regular kernel that I create with python -m ipykernel install ...

Let’s make a distinction between

  • the package ipykernel
    • (indirectly) needs a number of python-version-specific packages
  • the kernel.json for an installation of ipykernel
    • points to an absolute path to a specific python interpreter, which launches ipykernel_launcher
    • is used by other jupyter tools to find kernels
    • can be in a number of jupyter --paths
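
For concreteness, here is a minimal sketch of such a kernel.json, written from Python; the venv path is a made-up example, and the target directory assumes the default user-level data dir on Linux:

```python
import json
import pathlib

# Hypothetical venv location; substitute your own.
venv_python = pathlib.Path.home() / ".venvs" / "demo" / "bin" / "python"

spec = {
    # Absolute path to a specific interpreter, which launches ipykernel_launcher
    "argv": [str(venv_python), "-m", "ipykernel_launcher", "-f", "{connection_file}"],
    "display_name": "demo venv",
    "language": "python",
}

# Kernel specs live in a kernels/<name> directory on one of `jupyter --paths`;
# ~/.local/share/jupyter is the default user-level data dir on Linux.
target = pathlib.Path.home() / ".local" / "share" / "jupyter" / "kernels" / "demo"
target.mkdir(parents=True, exist_ok=True)
(target / "kernel.json").write_text(json.dumps(spec, indent=2))
```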

Provided you have “your virtualenv,” and “some other environment where ipykernel is installed”, there are pretty good docs on patterns for installing globally, or as a user: Installing the IPython kernel — IPython 8.11.0 documentation.

But at some point, if you want the IPython inside the ipykernel to use packages from an environment, it (the ipykernel package) pretty much needs to be installed in that environment. Doing crazy --user installs will eventually break, and the suggestion will end up being, “blow it away, and start over with mamba”.
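
A quick way to see this from inside a running kernel: what you can import is determined by the environment of the interpreter the kernel spec points at:

```python
# Run inside any kernel: the importable packages come from the environment
# of the interpreter in the spec's argv[0], reported here by sys.prefix.
import sys

print(sys.executable)  # the venv's python, if the spec points there
print(sys.prefix)      # the environment whose site-packages are importable
```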


Sorry for the late reply, did not get the notification.

Provided you have “your virtualenv,” and “some other environment where ipykernel is installed”, there are pretty good docs on patterns for installing globally, or as a user: Installing the IPython kernel — IPython 8.11.0 documentation.

Thanks! That’s what I have mostly referred to when working with ipykernel and kernel.json. My workflow is:

  1. I have a global jupyterlab installation
  2. Inside my venv I install ipykernel
  3. With the venv’s python interpreter, I run python -m ipykernel install --user --name <name> --display-name <display name>.
  4. Run jupyter and choose this new kernel.
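
After step 3, the new spec is visible on Jupyter's search path; for example, jupyter_client's KernelSpecManager (which Jupyter tools use for discovery) will list it:

```python
from jupyter_client.kernelspec import KernelSpecManager

# Maps each kernel name to its resource directory, including specs
# installed with `python -m ipykernel install --user`.
for name, resource_dir in KernelSpecManager().find_kernel_specs().items():
    print(f"{name}: {resource_dir}")
```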

From my experience/understanding, you have to install ipykernel inside the venv if you want the venv’s packages available inside Jupyter - which I believe is what you say here:

But at some point, if you want the IPython inside the ipykernel to use packages from an environment, it (the ipykernel package) pretty much needs to be installed in that environment.

IIRC, I never had problems with this workflow, i.e. nothing ever broke. However, there were times when I could not add ipython/ipykernel to a project’s dependencies just to “play around” interactively, or when adding it would slow down any subsequent dependency update.

I guess my question can be rephrased as: why does ipykernel need to be installed inside the venv? Couldn’t it be decoupled from the venv and “placed” inside jupyter? That way, anyone would just need a global jupyter package, and when the user chose a Python interpreter, Jupyter could check whether a kernel already existed or create a new one to pair with it.
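
A sketch of what that pairing could look like; spec_for is hypothetical, though KernelSpecManager is a real jupyter_client API:

```python
from pathlib import Path
from jupyter_client.kernelspec import KernelSpecManager

def spec_for(interpreter: Path):
    # Hypothetical pairing step: find an installed spec whose argv
    # already points at the chosen interpreter.
    ksm = KernelSpecManager()
    for name in ksm.find_kernel_specs():
        spec = ksm.get_kernel_spec(name)
        if spec.argv and Path(spec.argv[0]) == interpreter:
            return name
    return None  # no match - this is where Jupyter would create a new spec
```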

In basically all cases, kernels are deeply intertwined with the underlying language interpreter/compiler, and need access to much of the underlying machinery to provide interactive computing features against in-memory objects (completion, inspection, rich display, widgets), in addition to managing their own state on behalf of the user. They then need to make this information available to “any old client on any old computer”.
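
A small stdlib illustration of why that access has to be in-process: the objects being completed and inspected only exist inside the interpreter that owns them:

```python
import inspect
import json

# "Completion": enumerate attributes of a live, in-memory module object.
print([name for name in dir(json) if name.startswith("du")])  # ['dump', 'dumps']

# "Inspection": read the signature straight off the in-memory function.
print(inspect.signature(json.dumps))
```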

One could write another layer that interacted with “any old process on any old computer,” but it would then be on the hook for accounting for all the interactive pieces twice. And that package might want to use some more packages to do it, which would incur additional install complexity. This would probably end up looking a lot like the dependency tree of ipykernel and ipython: something for network access, configuration loading, display formatting, etc.

In between, on a single computer, one could use any existing kernel in any existing client to:

  • start a REPL process
  • manage its lifecycle
  • send input from the user to the process’ stdin
  • send the process’ stdout and stderr to the user

Most languages offer a way to do this, and some kernels offer special syntax for it. Naive unstructured text is a pretty narrow pipe, and getting most of the features that make the kernel experience fun would be not so fun to implement.
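
As a minimal sketch of that narrow pipe (assuming a python interpreter on PATH):

```python
import subprocess

# Drive a subordinate REPL over plain stdin/stdout - the "naive
# unstructured text" approach described above.
proc = subprocess.Popen(
    ["python", "-i"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,  # banner and prompts arrive as plain text too
    text=True,
)
out, _ = proc.communicate("import sys; print(sys.prefix)\n")
print(out)  # just a wall of text: no completion, inspection, or rich display
```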

Thanks for the clear explanation!