Hi!
Deploying JupyterHub on an NVIDIA GPU cluster here.
Is there a way to prevent pods from accessing GPUs?
Setting “extra_resource_limits” to 1 correctly allocates a single GPU replica to the pod, but setting it to “0” or leaving it empty allows the pod to access all GPUs in the system.
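For reference, this is roughly the setting I mean — a minimal sketch assuming a KubeSpawner-based deployment configured via `jupyterhub_config.py` (adapt it to however your Helm chart or config passes these values):

```python
# jupyterhub_config.py (sketch, assuming KubeSpawner is the spawner in use)

# Request exactly one GPU per single-user pod; Kubernetes then schedules
# a single nvidia.com/gpu replica to the pod.
c.KubeSpawner.extra_resource_limits = {"nvidia.com/gpu": "1"}

# In my setup, omitting this limit (or setting it to "0") is what ends up
# exposing all GPUs on the node to the pod.
```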
Is there something I am missing about how to prevent pods from getting GPUs?
The use case is to be able to run a “GPU-less”, CPU-only container when no GPU replica is available.
Self-replying, and leaving this here for anyone in the same boat:
this turned out to be a misconfiguration of the NVIDIA container runtime.
/etc/nvidia-container-runtime/config.toml should have the following lines: