I’m having problems using Jupyter notebook in vscode. Everything seems to work fine for a while, but then I always get the error message “Failed to start the kernel” and a log where I find the error “ImportError: /home/~/anaconda3/envs/introduktion/lib/python3.12/lib-dynload/_sqlite3.cpython-312-x86_64-linux-gnu.so: undefined symbol: sqlite3_deserialize…”
I think I have isolated the error to when I’m using an environment with matplotlib installed, and I’ve noticed that that package downgrades sqlite to version 3.31.1.
Could this be the source of the problem, and does anyone have a solution?
I also just noticed that the path "/home/~/anaconda3/…" looks strange… I’m on Linux, and the "/home/" part is redundant when using ~ to point to the user’s home directory, right?
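For reference, this is roughly how I checked the sqlite version in the environment (typed from memory, so the exact commands may be slightly off):

```bash
conda activate introduktion

# see which sqlite packages the env resolved to
conda list sqlite

# ask Python which SQLite library it links against; this is the same
# import the kernel trips over, so it can also reproduce the error
python -c "import sqlite3; print(sqlite3.sqlite_version)"
```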
Interactive package management is fairly nightmarish above a certain level of complexity, especially when mixing more than one of PyPI, conda-forge, and the ToS-encumbered `defaults` channel.

As a minimal step, it’s recommended to start with a fresh `environment.yml` and rebuild the env from the ground up with some “guideposts” for the solver, for example:
```yaml
# environment-introduktion.yml
name: introduktion
channels:
  - conda-forge
  - nodefaults
dependencies:
  - python ==3.11.*   # or a suitably-hard pin
  - numpy ==1.*       # or 2 if feeling feisty
  - matplotlib-base   # likely fine to leave unpinned, avoids `qt` from `matplotlib`
  # all the stuff
```
Then:
```bash
mamba env update --name introduktion --file environment-introduktion.yml
```
As a more drastic measure, discussed in many other places:

- fully remove anaconda
- start with a brand-new, unencumbered miniforge installer to get a new `base` (e.g. `~/mf`)
- never install anything in `base` other than e.g. `mamba`, `conda`, etc.

Then start with fully-managed, isolated environments (a rough sketch of that route is below).
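Something like the following, assuming the `~/mf` prefix from above and the current Miniforge installer naming (double-check the download URL against the conda-forge/miniforge README before running anything):

```bash
# download the Miniforge installer (URL pattern from the conda-forge/miniforge releases page)
curl -L -O https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh

# install non-interactively into ~/mf, which becomes the new, unencumbered base
bash Miniforge3-Linux-x86_64.sh -b -p ~/mf

# wire up the shell once, then open a new terminal
~/mf/bin/conda init bash

# build the working env from the environment file, leaving base untouched
mamba env create --file environment-introduktion.yml
```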
For an entirely different approach, one can try yet-newer tools like `pixi`, which mostly avoid some of the issues caused by heavy global installations. My personal recommendation, however, is to still start with miniforge, and manage `pixi` through normal means there instead of the `curl | bash` and `pixi self-update` techniques.
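Just to illustrate what “through normal means” could look like, here is a rough sketch; it assumes `pixi` is available on conda-forge and uses a throwaway `tools` env and project name as placeholders:

```bash
# install pixi from conda-forge into a small tools env, not into base
mamba create --name tools --channel conda-forge pixi
mamba activate tools

# scaffold a per-project, fully isolated environment
pixi init my-project
cd my-project
pixi add python matplotlib
pixi run python -c "import matplotlib; print(matplotlib.__version__)"
```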
Thank you so much for your answer, @bollwyvl! Most of it goes over my head right now, but I’ll try to fix that on my own time.
Just to be clear, since you use the phrase “above a certain level of complexity”, all I’ve done is this (from memory, may contain errors):
1. Create a new environment using anaconda:

   ```bash
   conda create -n my_python_environment
   ```

2. Install python and ipykernel in that environment:

   ```bash
   conda activate my_python_environment
   conda install python
   conda install ipykernel
   ```

3. Install matplotlib in the same environment (this step downgrades sqlite):

   ```bash
   conda install conda-forge::matplotlib
   ```

4. Create project_file.ipynb in vscode, and connect it to my_python_environment.

5. Write some code in the cells.
Everything works fine for a while and then I get the “Failed to start the kernel” message, and I then can’t do anything more with that kernel. Clearing outputs and restarting kernel does nothing. New .ipynb files cannot use that kernel either. I have been able to use pyplot from matplotlib at least a couple of times to do beginner stuff.
As you can probably tell, I don’t really know what I’m doing, but this seems like a really low level of complexity to me…
What I’m really trying to figure out is whether I’m the problem, or whether I should file a bug report somewhere. Am I maybe looking for support in the wrong forum?
I see I forgot to mention in my original post that I’m on Linux Mint 22 on one computer, and I have the same issue on another running Kubuntu 24.04.1.
Well, `matplotlib` is complex… you’re pulling in compiled code in C, C++, Fortran, etc.

VSCode complicates a lot of things it doesn’t manage itself. The most fool-proof thing I’ve found is to follow the steps above, activate the env, and then launch `code` from there:
```bash
mamba env update --name my_python_environment --file environment.yml
mamba activate my_python_environment
code .
```
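If it helps, a quick sanity check from that activated env can confirm the interpreter, SQLite, and matplotlib are all coming from the same place (generic checks, nothing specific to your setup):

```bash
# confirm the interpreter comes from the env, not the system
which python

# check which SQLite library version Python links against
# (the "undefined symbol: sqlite3_deserialize" error came from this import)
python -c "import sqlite3; print(sqlite3.sqlite_version)"

# confirm matplotlib imports cleanly outside the notebook
python -c "import matplotlib; print(matplotlib.__version__)"
```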