Docker container (Jupyter Notebook) on JupyterHub not able to use GPU on host system

Hi folks, I have a JupyterHub that spawns Docker containers. I want these containers to run ML workloads that use the GPU on the host. I have installed the NVIDIA Container Toolkit on the host.
Below is my config file.

import dockerspawner
import os
from jupyter_client.localinterfaces import public_ips
import subprocess

c = get_config()  # noqa

c.Authenticator.admin_users = {'ross'}

c.DockerSpawner.allowed_images = {
    'Python TensorFlow Notebook': '',
    'Python Pytorch Notebook': '',
}


# we need the hub to listen on all ips when it is in a container

c.JupyterHub.spawner_class = dockerspawner.DockerSpawner

# The docker instances need access to the Hub, so the default loopback port doesn't work:

c.JupyterHub.hub_ip = public_ips()[0]

c.JupyterHub.db_url = '/etc/jupyterhub_workspace/jupyterhub.sqlite'

c.JupyterHub.cookie_secret_file = '/etc/jupyterhub_workspace/jupyterhub_cookie_secret'

def create_dir_hook(spawner):
    username = spawner.user.name
    volume_path = os.path.join('/home/jupyter', username)
    if not os.path.exists(volume_path):
        # create the user's home directory on the host before spawning
        subprocess.call(["/sbin/mkhomedir_helper", username])

c.Spawner.pre_spawn_hook = create_dir_hook

c.Spawner.default_url = '/lab'

notebook_dir = '/home/jovyan/work'

host_dir = '/home/jupyter/{username}/'
c.DockerSpawner.notebook_dir = notebook_dir

# Mount the real user's Docker volume on the host to the notebook user's
# notebook directory in the container
c.DockerSpawner.volumes = {
    host_dir: notebook_dir,
}


# delete containers when they stop
c.DockerSpawner.remove = True

c.DockerSpawner.extra_create_kwargs = {'user': 'root'}

c.DockerSpawner.extra_host_config = {'runtime': 'nvidia'}

c.DockerSpawner.environment = {
    'GRANT_SUDO': 'yes',
    'CHOWN_HOME': 'yes',
    'CHOWN_EXTRA': '/home/jovyan',
    'CHOWN_HOME_OPTS': '-R',
    'NB_UID': 1000,
    'NB_GID': 1000,
}

I am able to spawn TensorFlow and PyTorch notebooks, but they are not able to use the GPU.

Tensorflow Notebook : GPU not detected
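As a quick sanity check, this standard-library-only sketch can be run in a notebook cell to see whether the container received the NVIDIA runtime's plumbing at all (the env vars and the `nvidia-smi` binary are what the runtime normally injects):

```python
import os
import shutil

# Diagnostic sketch: if the NVIDIA runtime was actually used for this
# container, it normally injects nvidia-smi onto the PATH and sets
# NVIDIA_* environment variables. All None here means the runtime never ran.
print("NVIDIA_VISIBLE_DEVICES:", os.environ.get("NVIDIA_VISIBLE_DEVICES"))
print("NVIDIA_DRIVER_CAPABILITIES:", os.environ.get("NVIDIA_DRIVER_CAPABILITIES"))
print("nvidia-smi on PATH:", shutil.which("nvidia-smi"))
```

If `nvidia-smi` is missing here, the problem is in the spawner/runtime configuration rather than in TensorFlow or PyTorch.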

Some configs from HOST Machine

docker daemon

cat /etc/docker/daemon.json
{
    "runtimes": {
        "nvidia": {
            "args": [],
            "path": "nvidia-container-runtime"
        }
    }
}

> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243

> nvidia-container-runtime --version
NVIDIA Container Runtime version 1.14.6
commit: 5605d191332dcfeea802c4497360d60a65c7887e
spec: 1.2.0

runc version 1.1.12
commit: v1.1.12-0-g51d5e94
spec: 1.0.2-dev
go: go1.21.8
libseccomp: 2.5.1

Can someone please help me out with this?


@mahendrapaipuri @manics any suggestions?

I’m on mobile reading this quickly, but search for “capabilities” and “nvidia” in this forum and I think you may find relevant posts.

Edit: SwarmSpawner spawns a GPU-enabled image but - #2 by consideRatio
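For reference, the posts that hint points to generally boil down to setting the NVIDIA runtime's environment variables in the spawner config. A sketch (assuming `c.DockerSpawner.environment` is the dict already defined in the `jupyterhub_config.py` above; CUDA-tagged images typically already set these themselves):

```python
# Sketch of the usual "capabilities"/"nvidia" fix for DockerSpawner.
c.DockerSpawner.environment.update({
    # which GPUs the NVIDIA runtime should expose to the container
    'NVIDIA_VISIBLE_DEVICES': 'all',
    # which driver capabilities to mount (compute = CUDA, utility = nvidia-smi)
    'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility',
})
```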


It looks like the cuda versions of those images have cuda in the tag:
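To make that concrete, the `allowed_images` mapping from the config above would point at the CUDA variants. A sketch only; the exact tag names are assumptions and should be verified against the registry's tag list:

```python
c.DockerSpawner.allowed_images = {
    # tags below are illustrative; check the available tags on Quay
    'Python TensorFlow Notebook (GPU)': 'quay.io/jupyter/tensorflow-notebook:cuda-latest',
    'Python PyTorch Notebook (GPU)': 'quay.io/jupyter/pytorch-notebook:cuda12-latest',
}
```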


Thank you very much @markperri. That worked. But I don’t see a cuda-latest tag on all images; for example, in I don’t see any cuda tag. Also, is there any template for creating custom CUDA-enabled Jupyter notebook images?

cc: @consideRatio @markperri

tensorflow-notebook has it, and pytorch-notebook has cuda tags too, but versioned to specific CUDA major versions. Look among the available tags and you’ll see cuda11, cuda12, or something similar, I believe.

Only those images have CUDA support, not base-notebook for example.


You can see pytorch here: Quay

The easiest way to create your own notebook image would be to copy all the NVIDIA build instructions from one of those existing images.
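Alternatively, a custom image can simply extend one of the CUDA-enabled stacks images instead of copying the build instructions by hand. A sketch, where the base tag and the installed packages are assumptions to replace with your own:

```dockerfile
# Illustrative base tag -- pick the CUDA variant matching your host driver
FROM quay.io/jupyter/pytorch-notebook:cuda12-latest

USER root
# system packages your workload needs (example only)
RUN apt-get update && \
    apt-get install -y --no-install-recommends git && \
    rm -rf /var/lib/apt/lists/*

# drop back to the notebook user, as the stacks images expect
USER ${NB_UID}
# extra Python packages on top of the stock image (example only)
RUN pip install --no-cache-dir transformers
```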



Thank you very much @markperri

copy all the Nvidia build instructions from one of those existing images.

How can I get this? Can we get the Dockerfiles for those images on Quay?

Check the docker-stacks repo. There you will find all the necessary metadata to build the container images.