I have installed the official debian jupyter packages from the debian repositories, and installed the R kernel at the same time.
I have used jupyter notebook successfully on the same computer in the (somewhat distant) past, with several other kernels (bash, gnuplot, markdown, sos). This was probably using debian 12, or possibly debian 11.
debian does not appear to have official trixie packages for bash, gnuplot, markdown or the sos kernels.
when I start jupyter lab, I am presented with icons for available kernels: python, R, bash, gnuplot, markdown, sos.
When I select these kernels one at a time, jupyter lab can find and connect only to the python and R kernels; it fails to connect to the other kernels for which icons are present.
Looking at the list of kernels returned from jupyter kernelspec list (which I assume might be helpful):

[ZB:~] jupyter kernelspec list
Available kernels:
  bash        /home/n7dr/.local/share/jupyter/kernels/bash
  ir          /home/n7dr/.local/share/jupyter/kernels/ir
  markdown    /home/n7dr/.local/share/jupyter/kernels/markdown
  gnuplot     /usr/local/share/jupyter/kernels/gnuplot
  sos         /usr/local/share/jupyter/kernels/sos
  python3     /usr/share/jupyter/kernels/python3
[ZB:~]
I am guessing that the python and R kernels (i.e., the ones installed from official debian packages) are the only ones that are properly installed, and the others are left over from the very old jupyter installation, and for some reason no longer work.
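One way to check that guess, for what it's worth, is to test whether the executable each kernelspec's kernel.json points at still exists. A rough sketch (the check_kernelspecs name and the two search paths are my own assumptions; adjust for your machine):

```shell
# Sketch: report whether the first argv entry of each kernel.json still
# resolves to an executable. "BROKEN" entries are the ones jupyter lab
# shows an icon for but cannot actually start.
check_kernelspecs() {
  for spec in "$@"; do
    [ -f "$spec/kernel.json" ] || continue
    exe=$(python3 -c 'import json,sys; print(json.load(open(sys.argv[1]))["argv"][0])' "$spec/kernel.json")
    # argv[0] may be an absolute path or a bare command resolved via PATH
    if [ -x "$exe" ] || command -v "$exe" >/dev/null 2>&1; then
      echo "OK:     $spec -> $exe"
    else
      echo "BROKEN: $spec -> $exe"
    fi
  done
}

check_kernelspecs "$HOME"/.local/share/jupyter/kernels/* /usr/local/share/jupyter/kernels/*
```

A BROKEN line would mean the kernelspec survived from the old installation but its interpreter (e.g. an old virtualenv's python) is gone.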
Given all these facts (I’m happy to provide any more information that might be necessary), what do I need to do to remove the old kernels that jupyter no longer seems to be finding, and to install the bash, gnuplot, markdown and sos kernels in a way that the current jupyter installation will be able to connect to them properly? A simple idiot-proof step-by-step list of commands to execute would be wonderful!
If they don’t work, (sudo) rm -rf on the not-working kernelspec directories in {various}/share/jupyter/kernels/{broken-kernel} (or, more safely, mv-ing them to some backup location) will make them stop appearing.
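A move-to-backup version of that could look like the sketch below; the backup path, the helper name, and the exact kernel names are assumptions, and the /usr/local ones would need sudo:

```shell
# Sketch: move stale kernelspec directories to a backup location instead
# of deleting them, so they stop appearing but can be restored later.
backup_kernelspec() {   # usage: backup_kernelspec <kernels-dir> <backup-dir> <name>...
  kernels_dir=$1; backup_dir=$2; shift 2
  mkdir -p "$backup_dir"
  for name in "$@"; do
    if [ -d "$kernels_dir/$name" ]; then
      mv "$kernels_dir/$name" "$backup_dir/"
    fi
  done
}

# the user-level ones (names taken from the kernelspec list above):
backup_kernelspec "$HOME/.local/share/jupyter/kernels" "$HOME/jupyter-kernels-backup" bash markdown
# the system-level ones need root, e.g.:
#   sudo mv /usr/local/share/jupyter/kernels/gnuplot \
#           /usr/local/share/jupyter/kernels/sos ~/jupyter-kernels-backup/
```

After this, jupyter kernelspec list should show only the kernels that actually work.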
Once cleaned, the “several other kernels” can be re-installed… however they were installed in the first place. Once installed, their kernelspec folders would need to be discoverable, either directly or by copying or symlinking into one of the {various}/share/jupyter/kernels directories you are seeing above. If there was complex environment fixturing, this could be… non-trivial, and might require manipulating the kernel.json.
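The symlinking step might look like this sketch; the source path under an env is purely illustrative, so point it at wherever the kernelspec actually landed:

```shell
# Sketch: symlink a kernelspec directory from wherever the kernel was
# installed into a kernels directory that jupyter searches.
link_kernelspec() {   # usage: link_kernelspec <spec-dir> <user-kernels-dir>
  mkdir -p "$2"
  ln -sfn "$1" "$2/$(basename "$1")"
}

# illustrative only; adjust the source path for your installation:
#   link_kernelspec "$HOME/mambaforge/envs/lab/share/jupyter/kernels/bash" \
#                   "$HOME/.local/share/jupyter/kernels"
```

After linking, the kernel should show up in jupyter kernelspec list under the user-level path, and the argv in its kernel.json still has to point at a working interpreter.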
Given the above, and that using the os-level package manager for most jupyter things isn’t particularly recommended (as in, we really can’t help when sudo pip install breaks someone’s OS), I don’t think a “list of scripts” is going to be forthcoming from anyone who isn’t sitting at your computer.
The above will start falling apart when browser assets start to mismatch, or kernel startup needs a lot of fixturing, which is entirely possible for deep/niche stacks.
- keep the base environment it creates entirely clean, mostly just with conda, mamba and friends
- manage as much as possible in a single environment.yml, keeping the “big rocks” like python, notebook, etc. fairly tightly pinned, at least to your desired major/minor versions
- use that to set up a separate environment which can be rebuilt on a whim with $CONDA_EXE env update --file environment.yml
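A minimal environment.yml along those lines might look like the sketch below; the pins and the kernel package names are illustrative assumptions, so check what each kernel is actually called on conda-forge before relying on them:

```yaml
# sketch of an environment.yml; versions and kernel package names are
# illustrative, not verified against conda-forge
name: lab
channels:
  - conda-forge
dependencies:
  - python =3.12
  - notebook =7.2
  - jupyterlab =4.2
  - bash_kernel        # assumed conda-forge package name
  - sos-notebook       # assumed conda-forge package name
  - r-irkernel
```

You would then create it once with $CONDA_EXE env create --file environment.yml and refresh it later with the env update command above.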
I’m trying to figure out what to do, following your detailed – but somewhat difficult for me to understand – response. (I’m just a guy wanting to get this working, and floundering at how difficult it all seems to be nowadays.)
Should I, do you think:
remove the debian packages and all the kernels from the computer.
On the one hand, it seems silly, given that debian has gone to the trouble of providing packages, that I should ignore those packages; but on the other, having installed those packages, it seems that one is basically stuck with no simple, clear, documented way to install other kernels; so starting again from scratch and avoiding the debian packages looks to me like the weird-but-sensible thing to do.
Sure, it’s a known fact that pip, conda, cargo, npm, etc. lack the rigour of a true distribution like debian, which does incredible, principled work emulated in many other places.
No doubt, you could build or request all of the extra kernels as packages for debian, and then work entirely from within those packages.
But the language-based ecosystems, and their cross-platform downstreams like conda-forge, are what Jupyter volunteer maintainers can feasibly impact directly, given the enormous variance of downstream user operating systems and architectures, and the separate release cadences of the lifecycles within them.