Deploying Jupyterhub on GPU

Hi, I’m trying to set up JupyterHub on GPU-based nodes.

| Component | Login/Storage Node | GPU Node 1 | GPU Node 2 |
| --- | --- | --- | --- |
| CPU | 1× AMD EPYC 7413 | 2× AMD EPYC 9354 | 2× AMD EPYC 7413 |
| RAM | 4× 32 GB | 12× 48 GB | 16× 32 GB |
| Storage (NVMe SSD) | 2 TB NVMe SSD | 2 TB NVMe SSD | 2 TB NVMe SSD |
| Storage (HDD/SSD) | 8× 16 TB SATA HDD | 2× 8 TB NVMe SSD | 1× 8 TB SATA SSD |
| GPU | – | 8× L40S Ada 48 GB | 8× L4 Ada 24 GB |

How do I set up JupyterHub using Kubernetes and Docker so that each user receives 2 CPU cores and 2 GB of memory whenever they log in? Also, is it possible to deploy JupyterHub on an on-premises cluster rather than in a High-Performance Computing (HPC) environment?
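For the per-user resource question, a minimal sketch of what this could look like with the Zero to JupyterHub Helm chart's `config.yaml` (assuming you deploy via that chart; the `singleuser.cpu` and `singleuser.memory` keys are the chart's standard way to set per-user guarantees and limits):

```yaml
# Zero to JupyterHub (z2jh) Helm chart values sketch.
# `guarantee` becomes the pod's resource request, `limit` its resource limit.
singleuser:
  cpu:
    guarantee: 2
    limit: 2
  memory:
    guarantee: 2G
    limit: 2G
```

Applied with something like `helm upgrade --install jhub jupyterhub/jupyterhub --values config.yaml`, every spawned user pod then requests exactly 2 CPUs and 2 GB of RAM, and the Kubernetes scheduler places it only on a node with that capacity free.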

You can deploy JupyterHub on a small k8s cluster. The complexity is that you need a k8s cluster where a pod can access its storage no matter which node it’s running on, and where networking works, etc. That is k8s-native complexity rather than JupyterHub complexity.

I’ve run JupyterHub on a k8s cluster running on 8 Raspberry Pi computers. It worked, but home folder storage performance for the user servers wasn’t good.

At the time I used k3s to set up k8s, and NFS to provide storage.
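For the "storage accessible from any node" part, one common approach is an NFS-backed PersistentVolume with `ReadWriteMany` access, so a user pod can mount its home directory regardless of which node it lands on. A hedged sketch (the server address, export path, and capacity below are placeholders for illustration, not values from this thread):

```yaml
# Kubernetes PersistentVolume backed by an NFS export.
# server/path are hypothetical; point them at your storage node's export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jupyterhub-home-nfs
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany   # lets pods on any node mount the same volume
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.5
    path: /export/home
```

A PersistentVolumeClaim bound to this PV (or a dynamic NFS provisioner) can then serve as the user storage that JupyterHub mounts into each single-user pod.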
