Miniconda installed on a per-user basis (separately under /home//miniconda for all users)
CUDA 10.1
JupyterHub 1.0.0
4 Tesla P100 GPUs installed on the server
Docker 19.03.2
2.) The Question
How can we let JupyterHub spawn (SystemUserSpawner) Docker containers with an arbitrary set of GPUs?
At the moment I can spawn a container with all GPUs, or with specific, fixed GPUs, via this keyword in the Dockerfile:
ENV NVIDIA_VISIBLE_DEVICES all
or
ENV NVIDIA_VISIBLE_DEVICES 1,2
What we want is something else:
In the spawn menu, one should be able to choose a container with or without GPUs.
If a container with GPUs is chosen, the number of GPUs should be selectable.
We have 4 GPUs and want to hand them out first come, first served.
So a user can pick 4, 3, 2, or 1 GPU(s) if they are first.
The next user can pick whatever is left, all of it or some.
How can I create a Dockerfile for a spawnable image that receives any number of the GPUs that are left in my pool?
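What I imagine is roughly something like the following, untested sketch in jupyterhub_config.py: let the user pick the number of GPUs in an options form and pass the selection on at spawn time instead of baking it into the image. The class name, the form field name and the use of the nvidia runtime via extra_host_config are my own assumptions, not a working recipe:

```python
# jupyterhub_config.py -- untested sketch, not a working recipe.
# Assumes the nvidia container runtime is configured for Docker and that
# DockerSpawner passes Spawner.environment into the container.
from dockerspawner import SystemUserSpawner

class GPUChoiceSpawner(SystemUserSpawner):
    def options_from_form(self, formdata):
        # form values arrive as lists of strings
        return {'gpus': int(formdata.get('gpus', ['0'])[0])}

c.JupyterHub.spawner_class = GPUChoiceSpawner

# simple spawn-page form: pick 0-4 GPUs
c.Spawner.options_form = """
<label for="gpus">Number of GPUs (0-4):</label>
<input name="gpus" type="number" min="0" max="4" value="0">
"""

# run the containers with the nvidia runtime so NVIDIA_VISIBLE_DEVICES is honoured
c.DockerSpawner.extra_host_config = {'runtime': 'nvidia'}
```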
Does anyone have a clue whether there is a mechanism for spawnable Docker containers that passes any of the currently available GPUs into the container?
At this time I only know of the possibility to pass in a particular GPU, e.g. by its UUID or index, so you always get a fixed GPU. This means that, to distribute the GPUs with JupyterHub, I would need to build a separate image for each GPU just to make it spawnable.
We want to offer a pool of GPUs to the users (students), who may spawn a container in JupyterHub with zero, one, or more GPUs; following the principle of “first come, first served”, a student gets whatever is left in the pool.
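For the first-come, first-served part, I imagine something along these lines on top of the sketch above: a pre_spawn_hook that takes free GPUs from a pool and sets NVIDIA_VISIBLE_DEVICES per container. The pool bookkeeping below is purely my own assumption (in-memory, so it is lost on a hub restart), not an existing JupyterHub feature:

```python
# continuation of the jupyterhub_config.py sketch above -- untested.
# GPU_POOL and the in-memory bookkeeping are my own assumptions.
GPU_POOL = {'0', '1', '2', '3'}   # GPU indices on the server
in_use = {}                        # username -> set of assigned GPU indices

def assign_gpus(spawner):
    wanted = int(spawner.user_options.get('gpus', 0))
    used = set().union(*in_use.values()) if in_use else set()
    free = GPU_POOL - used
    if wanted == 0:
        # this container gets no GPUs at all
        spawner.environment['NVIDIA_VISIBLE_DEVICES'] = 'none'
        return
    if wanted > len(free):
        raise RuntimeError(f'only {len(free)} GPU(s) left in the pool')
    chosen = sorted(free)[:wanted]
    in_use[spawner.user.name] = set(chosen)
    # the container only sees the GPUs assigned here
    spawner.environment['NVIDIA_VISIBLE_DEVICES'] = ','.join(chosen)

def release_gpus(spawner):
    # put the GPUs back into the pool when the container stops
    in_use.pop(spawner.user.name, None)

c.Spawner.pre_spawn_hook = assign_gpus
c.Spawner.post_stop_hook = release_gpus
```

Of course the bookkeeping would have to survive hub restarts and failed spawns to be usable in practice, so this is only meant to illustrate the kind of mechanism I am looking for.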