JupyterHub for multiple server nodes

Hi. I’m very new to JupyterHub.

I want to set up JupyterHub with multiple server nodes. Let’s say I have 3 server nodes, and each of them has 3 GPU cards installed.

Some features that I need:

  • Only one GPU is allowed for one user session.
  • Automatically balance the user sessions to available nodes.

What would be the best solution for me? It would be great if you could provide setup instructions as well.

The Spawner is what is responsible for assigning computational resources to user servers. It’s possible that Docker Swarm and SwarmSpawner will give you what you need with the least effort, but the main task is to have a system for assigning resources (whether an existing one like Docker Swarm, Kubernetes, or Slurm, or your own), and to pick or write a Spawner that uses it. A minimal sketch of the Swarm route is below.
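
If you go the Swarm route, the setup is a short jupyterhub_config.py. Here is a minimal sketch, assuming dockerspawner is installed and the Hub host is a Docker Swarm manager; the image and network names are placeholders:

```python
# jupyterhub_config.py -- minimal SwarmSpawner sketch (assumes dockerspawner
# is installed and this host is a Docker Swarm manager).
c.JupyterHub.spawner_class = 'dockerspawner.SwarmSpawner'

# Placeholder single-user image; any image that can run jupyterhub-singleuser works.
c.SwarmSpawner.image = 'quay.io/jupyter/base-notebook:latest'

# Placeholder overlay network; it must be shared by the Hub and user services.
c.SwarmSpawner.network_name = 'jupyterhub-net'
```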

Thank you for your quick response. My current system uses Slurm; do you have any guidelines for that one?

There is batchspawner, which you can use on your SLURM cluster so that JupyterHub spawns JupyterLab servers as SLURM jobs; a minimal configuration sketch is below.
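
Enabling it takes a few lines in jupyterhub_config.py. A minimal sketch, assuming batchspawner is installed on the Hub node and the Hub can submit SLURM jobs on behalf of users; the partition, walltime, and memory values are placeholders:

```python
# jupyterhub_config.py -- minimal batchspawner/SlurmSpawner sketch.
import batchspawner  # noqa: F401 -- needed to register the batchspawner API handler

c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'

# Placeholder resource requests; these are filled into the default
# #SBATCH batch script template.
c.SlurmSpawner.req_partition = 'gpu'      # hypothetical partition name
c.SlurmSpawner.req_runtime = '08:00:00'   # example walltime
c.SlurmSpawner.req_memory = '16G'         # example memory per job
```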

  • Only one GPU is allowed for one user session.

For this you can configure your SLURM spawner batch script to submit jobs with whatever constraints you have, as shown in the sketch below.
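
Building on the sketch above, the one-GPU-per-session rule can be expressed as a SLURM generic resource (GRES) request; with batchspawner’s default batch script template this becomes an #SBATCH --gres line in the submitted job. The trait names are SlurmSpawner’s, so check your installed version:

```python
# Request exactly one GPU per single-user server; rendered as
# '#SBATCH --gres=gpu:1' by the default SlurmSpawner script template.
c.SlurmSpawner.req_gres = 'gpu:1'

# Any extra #SBATCH flags can be passed through req_options
# (the QOS name here is hypothetical).
c.SlurmSpawner.req_options = '--qos=normal'
```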

  • Automatically balance the user sessions to available nodes.

SLURM will take care of that; there is nothing to do on the JupyterHub side.

Thank you @mahendrapaipuri

Now I can submit the job to SLURM. Once I log in to the JupyterHub interface, I can select the resource pool, but the Hub cannot redirect me to the Jupyter notebook that started on the compute node. It seems I missed some proxy or connectivity configuration, but I could not find where to set it up. Could you advise me on that?

Thank you.

This is a long-standing issue in batchspawner. Please install it directly from git using `pip install git+https://github.com/jupyterhub/batchspawner.git` and it should work.
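
If the redirect still misbehaves after upgrading, it is also worth checking that the compute nodes can reach the Hub’s API. A hedged sketch of the relevant JupyterHub connectivity settings, with a hypothetical Hub address:

```python
# jupyterhub_config.py -- Hub <-> compute-node connectivity sketch.
c.JupyterHub.hub_ip = '0.0.0.0'                  # Hub API listens on all interfaces
c.JupyterHub.hub_connect_ip = 'hub.example.com'  # hypothetical address that the
                                                 # single-user servers on compute
                                                 # nodes use to reach the Hub
```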

It’s done.

Thank you very much.
