Singleuser GPU limits

This might be because I don't understand how singleuser works (or K8s scheduling, for that matter), but I exhaust the limits as soon as I start a single server.

My config looks like this (abridged to the relevant singleuser values; I believe the GPU limit lives under extraResource.limits):

    singleuser:
      storage:
        dynamic:
          storageClass: openebs-zfspv
      nodeSelector: worker
      image:
        name: my-singleuser-gpu-image
        tag: v1.5.0
      extraResource:
        limits:
          nvidia.com/gpu: 1

nvidia-smi within a notebook for the first user shows only one of the eight GPUs, so that server should not be holding all of them.
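A quick way to sanity-check this from inside the notebook is to count the devices that nvidia-smi -L reports, one per line. A minimal sketch that parses such output (the sample line and UUID below are made up, not from my cluster):

    # Each visible GPU appears as one "GPU <index>: ..." line in `nvidia-smi -L`.
    # Sample output for a container that was handed a single GPU (UUID is fake):
    sample = "GPU 0: NVIDIA A100-SXM4-40GB (UUID: GPU-00000000-0000-0000-0000-000000000000)"

    visible = [line for line in sample.splitlines() if line.startswith("GPU ")]
    print(len(visible))  # → 1, matching what the first user's server sees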

Now, the resources for the node look OK:

  cpu:                256
  ephemeral-storage:  397152651836
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2101193248Ki
  nvidia.com/gpu:     8
  pods:               110


Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests         Limits
  --------           --------         ------
  cpu                100m (0%)        100m (0%)
  memory             1126170624 (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)           0 (0%)
  hugepages-1Gi      0 (0%)           0 (0%)
  hugepages-2Mi      0 (0%)           0 (0%)
  nvidia.com/gpu     1                1
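For what it's worth, the scheduler's accounting for extended resources such as nvidia.com/gpu is plain integer bookkeeping per node, so the numbers above suggest a second server should fit. A minimal sketch of that bookkeeping (function and variable names are mine):

    def gpus_left(allocatable: int, requests: list) -> int:
        """Extended resources are counted as integers: a new pod fits on the
        node only if its GPU request does not exceed what remains."""
        return allocatable - sum(requests)

    # Matches the node output above: 8 GPUs allocatable, one pod requesting 1.
    remaining = gpus_left(8, [1])
    print(remaining)       # → 7
    print(remaining >= 1)  # → True: a second 1-GPU server should fit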

Does anyone have an idea what I am missing here?

Update: I restarted the master node after reading this:

It works now, so this was not related to Jupyter.