Hi,
I am new to AI-assisted coding and have just set up my JupyterLab environment.
In several videos I have seen that you can start a new chat by hitting the “+” button and naming the new chat “chat 1”, “chat 2”, “chat 3”, and so on. Below is a screenshot I took from a DeepLearning.AI course video:
In my environment (jupyter_ai 2.31.6), however, when I hit the “+” button, my current chat disappears and I start from scratch, with no way to access the old chat.
Any ideas? Maybe some access privileges are missing?
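For anyone comparing setups: the installed version can be confirmed from inside the kernel. This is a minimal standard-library sketch (the package names `jupyter_ai` and `jupyterlab` are taken from the summary below; a package that is absent is reported rather than raising):

```python
from importlib.metadata import version, PackageNotFoundError

def pkg_version(name: str) -> str:
    """Return the installed version of a package, or 'not installed'."""
    try:
        return version(name)
    except PackageNotFoundError:
        return "not installed"

print("jupyter_ai:", pkg_version("jupyter_ai"))
print("jupyterlab:", pkg_version("jupyterlab"))
```

Running this in a notebook cell shows which versions the active kernel actually sees, which can differ from what the JupyterLab frontend reports.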
Environment Summary
Setup
- Host: Proxmox (latest stable release)
- Guest OS: Ubuntu 24.04 LTS (headless)
- Access: HTTPS (Nginx reverse proxy with Let’s Encrypt), SSH / xterm.js
- GPU passthrough: NVIDIA GeForce RTX 5060 Ti (16 GB GDDR7) — confirmed working
torch.cuda.is_available() == True
Jupyter (Server / Frontend)
- JupyterLab: 4.4.10 (as shown in WebGUI “About”)
- Jupyter Server: 2.17.0
- Reverse Proxy: Nginx (TLS via Let’s Encrypt)
- Browser Access: HTTPS on public domain
- jupyter_ai 2.31.6
- AI Plugins (frontend): OpenAI Chat (GPT-5), AI Code Interpreter, AI Plot Explorer
Jupyter Kernel / Python Environment
- Python: 3.12.12
- sys.executable: /home/peter/miniconda3/envs/trading/bin/python (example)
- Conda environment: trading
- IPython: 9.6.0
- ipykernel: 6.x
- JupyterLab (pkg in kernel): 4.4.9
- Jupyter Server (pkg in kernel): 2.17.0
- Notebook (pkg): not installed
Data & AI Stack (Kernel)
- PyTorch: 2.9.0+cu130
- CUDA: 13.0
- Triton: 3.5.0
- pandas: 2.3.3
- numpy: 2.2.6
- matplotlib: 3.10.7
- pandas_ta: not installed
- ccxt: 4.5.11
- yfinance: 0.2.66
GPU Info
- Device: NVIDIA GeForce RTX 5060 Ti
- Driver: 580
- CUDA available: True
- PyTorch CUDA version: 13.0
- Persistence mode: Enabled
- Frameworks used: PyTorch, Triton (GPU kernels)
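The summary above was collected by hand; a small script along these lines can reproduce most of it (a sketch using only the standard library plus an optional torch import; the package names are the ones listed above, and torch is treated as optional so the script also runs outside the kernel environment):

```python
import sys
from importlib.metadata import version, PackageNotFoundError

def report(pkgs):
    """Print the Python interpreter, package versions, and GPU status."""
    print("Python:", sys.version.split()[0])
    print("sys.executable:", sys.executable)
    for name in pkgs:
        try:
            print(f"{name}: {version(name)}")
        except PackageNotFoundError:
            print(f"{name}: not installed")
    try:
        import torch  # only present in the kernel environment
        print("CUDA available:", torch.cuda.is_available())
        if torch.cuda.is_available():
            print("Device:", torch.cuda.get_device_name(0))
    except ImportError:
        print("torch: not installed")

report(["jupyterlab", "jupyter_server", "jupyter_ai",
        "ipykernel", "pandas", "numpy", "matplotlib"])
```

Pasting the output of such a script into a bug report avoids transcription errors between the frontend and kernel-side versions.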
Notes:
The JupyterLab version shown in the web interface (4.4.10) refers to the server/frontend installation; the jupyterlab package version inside the Python kernel environment (4.4.9) refers to the kernel-side package.
