Shell PATH not affected by active kernel

Hi all. I’ve been using Jupyter Notebook for years (I haven’t switched to JupyterLab consistently yet, but this issue appears to affect it too). A couple of months ago I started noticing a new problem: my conda/venv environment’s PATH isn’t picked up when I switch kernels in a running notebook server.

I usually create environments with conda and register a kernel for each one with the standard python -m ipykernel install --user --name {env_name}. That still works for importable packages (the contents of the env’s site-packages dir), but I also rely on packages that are installed with conda or pip and invoked from the shell. I normally call those with the IPython shell magic, e.g. !package_name arg1, and such cells now fail with “/bin/bash: package: command not found”. I thought it might just be a problem with the shell magic, but invoking the same command via the subprocess module produces the same result.
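To make the failure concrete, here’s a minimal reproduction from a single cell (gsutil is just an example of a CLI-only tool installed in the kernel’s env but not the server’s):

```python
import shutil
import subprocess

# gsutil lives in the kernel env's bin directory, not the server env's.
print(shutil.which("gsutil"))            # prints None inside the notebook

try:
    subprocess.run(["gsutil", "version"], check=True)
except FileNotFoundError:
    print("gsutil not on PATH")          # same failure mode as !gsutil
```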

I’ve narrowed it down to the Notebook server specifically, I think: if I launch ipython or jupyter console in a separate terminal and run the same !package_name command in either, it’s found. Back in the notebook, sys.executable and sys.path match what I expect from the active kernel, but !which python and !echo $PATH always reflect the environment from which I launched the notebook server, regardless of which kernel is active.
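For reference, this is the comparison I’m making, all from one cell (assuming a conda-style layout where the env’s bin directory leads PATH after activation):

```python
import os
import sys

print(sys.executable)                            # tracks the selected kernel
print(os.environ["PATH"].split(os.pathsep)[0])   # stays the server env's bin
```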

I checked the Notebook GitHub repo, saw the note that active development has moved to JupyterLab, and tried installing and running that instead, but I see the same behavior: the PATH used for shell commands is unaffected by the active kernel. I have less experience with JupyterLab than with the classic Notebook server, though, so I might be overlooking a step there.

I’m fairly sure this is new behavior; I recall being able to use shell magics and subprocess to invoke external packages that differed by virtual environment. I also can’t find any issue reports or Stack Overflow questions describing the problem I’m seeing. I usually manage environments with conda (generally Miniconda, though I’ve tried starting from Anaconda and see it there too), but I just tried the equivalent venv approach and hit the same issue. I’ve also spun up fresh AWS EC2 instances to rule out stray ~/.bashrc edits or the like: no improvement. I’m on Notebook 6.2.0, but rolling back to v5.7.6 (chosen arbitrarily) didn’t help. My OS is Ubuntu; I see it on both 20.04 and 18.04.

To reiterate: any importable package (pandas, flask, whatever) resolves correctly based on the active kernel, but shell/CLI tools like gsutil only work if they’re installed in the environment from which I launched the notebook server. I tried digging into the Notebook source code to find where it sets PATH based on the kernel, but couldn’t identify where it might be going wrong. Can anyone help me troubleshoot this, please?
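One obvious stopgap (sketch below, assuming the usual layout where an env’s CLI tools sit next to its python binary) is prepending the active kernel’s own bin directory to PATH at the top of the notebook:

```python
import os
import sys

# Put the active kernel's bin directory first on PATH so shell magics
# and subprocess calls can find CLI tools from the same env.
env_bin = os.path.dirname(sys.executable)
os.environ["PATH"] = env_bin + os.pathsep + os.environ["PATH"]
```

That’s a band-aid rather than a fix, though; I’d rather understand why the behavior changed.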

What you describe is, to the best of my understanding, what the Jupyter framework provides: no Python environment is “activated” when a kernel launches, so the kernel process simply inherits the environment (PATH included) of the server that spawned it. One exception is the nb_conda_kernels extension, which discovers conda environments and essentially fabricates a kernelspec for each discovered env, such that the environment is activated upon launch. (Don’t hold me to this.)
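If you want per-kernel PATH behavior without that extension, one option is the “env” mapping that kernelspecs support: jupyter_client merges it into the kernel’s environment at launch. A rough sketch that patches an existing spec (the kernel name and env prefix are illustrative, and note this bakes in whatever PATH holds when you run it; some newer jupyter_client versions also expand ${PATH}-style templates in these values, but I wouldn’t count on that across versions):

```python
import json
import os
from pathlib import Path

# Illustrative locations: substitute your kernel name and env prefix.
spec_path = Path.home() / ".local/share/jupyter/kernels/myenv/kernel.json"
env_bin = str(Path.home() / "miniconda3/envs/myenv/bin")

spec = json.loads(spec_path.read_text())
# The "env" mapping in kernel.json is applied to the kernel process at launch.
spec.setdefault("env", {})["PATH"] = env_bin + os.pathsep + os.environ["PATH"]
spec_path.write_text(json.dumps(spec, indent=2))
```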

Might you have been using nb_conda_kernels in the past, and it has since been deconfigured? I believe that could happen if configuration files were deleted, since it registers a custom KernelSpecManager. I also believe it may have been installed by default in Anaconda distributions (not sure if that’s still the case).
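One quick way to check whether it was ever configured: scan the usual Jupyter config locations for references to it (these two directories are just the common defaults; jupyter --paths prints the authoritative list for your install):

```python
import sys
from pathlib import Path

# Common user- and environment-level config directories.
for base in (Path.home() / ".jupyter", Path(sys.prefix) / "etc/jupyter"):
    for cfg in base.glob("jupyter*config*"):
        if cfg.is_file() and "nb_conda_kernels" in cfg.read_text(errors="ignore"):
            print(f"{cfg}: references nb_conda_kernels")
```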

nb_conda_kernels can be a bit invasive if you want to use multiple Jupyter configurations, but perhaps try looking at this from that angle, e.g. installing it into the environment the server runs from (conda install -n base nb_conda_kernels if you launch from base) and restarting the server.