Prevent pods from getting GPUs

Hi!
Deploying JupyterHub on an NVIDIA GPU cluster here.
Is there a way to prevent pods from accessing GPUs?
Setting “extra_resource_limits:” to 1 correctly allocates a single GPU to the pod, but setting it to “0” or leaving it empty allows the pod to access all GPUs in the system.
Is there something I am missing about how to prevent pods from getting GPUs?

The use case is to be able to run a “GPU-less” container on CPU only when no GPU replica is available.
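For context, a minimal sketch of how the limit is set per profile, assuming the Zero to JupyterHub Helm chart (the profile names are just placeholders):

    singleuser:
      profileList:
        - display_name: "CPU only"
          default: true
          # no extra_resource_limits here -- this pod should not see any GPU
        - display_name: "1 GPU"
          kubespawner_override:
            extra_resource_limits:
              nvidia.com/gpu: "1"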

Self-replying, and leaving it here for anyone in the same boat:
this was caused by a misconfiguration of the NVIDIA container runtime.
/etc/nvidia-container-runtime/config.toml should contain the following lines:

    accept-nvidia-visible-devices-as-volume-mounts = true
    accept-nvidia-visible-devices-envvar-when-unprivileged = false

And then the NVIDIA device plugin should be deployed with:

    compatWithCPUManager: true
    deviceListStrategy: volume-mounts
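
For example, if the plugin is installed from the NVIDIA device plugin Helm chart, these values could be passed like this (the release name, namespace, and nvdp repo alias are assumptions about the local setup):

    helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
    helm upgrade -i nvidia-device-plugin nvdp/nvidia-device-plugin \
      --namespace nvidia-device-plugin --create-namespace \
      --set compatWithCPUManager=true \
      --set deviceListStrategy=volume-mounts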

This (apparently) prevents containers from seeing all GPUs when no allocation is specified, and correctly exposes only 1 GPU when it is assigned via limits.
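
A quick way to check the behaviour from outside the pods (the pod names below are placeholders):

    # Pod with nvidia.com/gpu: 1 -- should list exactly one GPU
    kubectl exec gpu-pod -- nvidia-smi -L
    # Pod without a GPU limit -- no NVIDIA device nodes should be injected
    kubectl exec cpu-only-pod -- sh -c 'ls /dev | grep nvidia || echo "no GPUs visible"'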

This is “clearly” documented here: [External] Read list of GPU devices from volume mounts instead of NVIDIA_VISIBLE_DEVICES - Google Docs