Multiple Conda Environments

I am running a TLJH server on AWS. Some of the Jupyter notebooks I am hosting have very different requirements. Rather than adding all of those (potentially conflicting) requirements into the default base environment at /opt/tljh/user, I would like to make multiple environments available, and am hoping users could pick the appropriate environment from the kernel selector within each notebook (like you can do if running notebooks from VS Code).

Is this possible?

If so, how do I recreate the initial base environment used by TLJH? The “What does the installer do?” page says it installs a mambaforge environment, but doesn’t give any details beyond that.

Thanks in advance for your help!

The main thing is that the user environment at /opt/tljh/user is a conda env, so you can use sudo /opt/tljh/user/bin/conda to perform operations that will be shared by users.

Is this possible?

Yes, indeed, you can do this. Once things are set up, you can create new shared envs, e.g.:

sudo /opt/tljh/user/bin/conda create -n env-name python=3.10 ipykernel [more packages]

To make it a kernel, you need to make sure the ipykernel package is in the env (assuming you are making envs for Python kernels, otherwise you need whatever kernel package you are using).

The next step is to make the kernels in each env available to the user env.

You have two options, generally:

  1. manually, explicitly, for each env:

    sudo /opt/tljh/user/envs/env-name/bin/python3 -m ipykernel install --name "env-name" --display-name "Env Name in UI" --prefix /opt/tljh/user
    

    where:

    • --name is the internal key (the directory name) for the kernel spec (should be a short, simple string)
    • --display-name is the human-friendly label you see in the UI (shown in the kernel selector)
    • --prefix is the environment where the kernel should be available (i.e. the default user environment)
  2. use something like nb_conda_kernels to automatically locate any conda environments and create kernelspecs for them, though that is largely unmaintained, and seems to have some compatibility issues at this point.
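To confirm the registration worked, you can list the kernelspecs the user environment will see (the path assumes a default TLJH install, so this only runs on the server itself):

```shell
# kernels registered with --prefix /opt/tljh/user show up here,
# alongside the default python3 kernel
sudo /opt/tljh/user/bin/jupyter kernelspec list
```

Each entry corresponds to a directory under share/jupyter/kernels in that prefix.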

This process is the same for Python virtualenvs or any other environment installation of your choice. The main concern if you are creating a shared environment is filesystem permissions: users need read-execute permissions on the env (this should be the default if you create it with sudo conda create, but may not be for every possible way to create an env).
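If an env was created some other way (e.g. as root with a restrictive umask), you can grant the needed bits explicitly; the env path here is illustrative and assumes a default TLJH layout:

```shell
# give all users read access, plus execute/traverse only where the
# owner already has it (that's what capital X does), without making
# anything writable by other users
sudo chmod -R a+rX /opt/tljh/user/envs/env-name
```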

@minrk Thanks for the detailed solution-- I’ll definitely give it a try! I had gotten as far as creating another environment, but couldn’t figure out how to expose it. Thanks for the tips.

I especially like the option to use venvs other than conda (not a fan). We have custom Python modules we install from an internal PyPI, which conda doesn’t support. I don’t like having to use a mixed conda/pip installation; it seems potentially problematic to have two different package managers handling dependencies. I prefer to use poetry or pipenv. Hopefully this will solve all my problems! ;^)

Yes, you can register any Python installation (with ipykernel installed) as a kernel. The same goes for R, Julia, etc. We often use conda just to get Python itself, then pip for Python packages. That works just fine, but if you get your base Python some other way, there’s no reason you can’t use that as well. It doesn’t need to have any relationship with the user environment, which is where jupyter-server runs, other than placing the kernelspec (which is a .json file and a logo) in that prefix (or system-wide in /usr/local).
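As a concrete illustration of how small that kernelspec is, here is one written by hand into a scratch prefix (the paths and names are made up for illustration; running ipykernel install produces the same layout for you):

```shell
# a kernelspec is a directory named after the kernel, holding kernel.json
# (and, optionally, logo files)
mkdir -p /tmp/demo-prefix/share/jupyter/kernels/env-name
cat > /tmp/demo-prefix/share/jupyter/kernels/env-name/kernel.json <<'EOF'
{
  "argv": ["/opt/tljh/user/envs/env-name/bin/python3",
           "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "Env Name in UI",
  "language": "python"
}
EOF
cat /tmp/demo-prefix/share/jupyter/kernels/env-name/kernel.json
```

The argv line is what jupyter-server actually launches, which is why the kernel needs no other connection to the user environment.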