Spawn multiple hubs in the same Kubernetes cluster using Helm

I'm using Terraform to deploy JupyterHub on GKE. I'm following the Zero to JupyterHub on Kubernetes guide, so I'm using Helm to deploy it, but configured through Terraform. This works great when deploying a single hub in a single cluster. Now, though, I want to deploy multiple hubs in the same cluster, and I believe I can separate them using namespaces.

Is this as easy as creating another helm_release but using a different namespace?
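
Something like this (untested) is roughly what I have in mind; the release names, namespaces, and values file names are all made up:

```hcl
# Hypothetical sketch: one helm_release of the z2jh chart per hub,
# each in its own namespace, all managed from the same Terraform config.
resource "helm_release" "hub_a" {
  name             = "hub-a"
  namespace        = "hub-a"
  create_namespace = true
  repository       = "https://jupyterhub.github.io/helm-chart/"
  chart            = "jupyterhub"
  values           = [file("values-hub-a.yaml")]
}

resource "helm_release" "hub_b" {
  name             = "hub-b"
  namespace        = "hub-b"
  create_namespace = true
  repository       = "https://jupyterhub.github.io/helm-chart/"
  chart            = "jupyterhub"
  values           = [file("values-hub-b.yaml")]
}
```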

Yes, it should be that easy.

One thing to be aware of, though, is cluster-wide z2jh settings such as autoscaling or the user scheduler:
https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/optimization.html
You may need to disable those.
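
For example, for your hub-a release above, something like this in its values should switch them off per release. This is an untested sketch; the keys are the ones described on that page:

```hcl
# Untested sketch: per-release chart values that disable the user scheduler,
# pod priority, and placeholder pods, inlined into the release from Terraform.
resource "helm_release" "hub_a" {
  name       = "hub-a"
  namespace  = "hub-a"
  repository = "https://jupyterhub.github.io/helm-chart/"
  chart      = "jupyterhub"

  values = [<<-EOT
    scheduling:
      userScheduler:
        enabled: false
      podPriority:
        enabled: false
      userPlaceholder:
        enabled: false
  EOT
  ]
}
```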

This sounds like a great place for a JupyterHub module in Terraform! I'm planning to work on that in the next month or two, and then testing this should be pretty easy. Have you had success @aaronstrong?

Sorry to bump this thread, but I am actually looking to do the same thing where I deploy multiple hubs in a single Kubernetes cluster (using EKS on AWS).

@manics, I have run into issues trying to use autoscaling, placeholder pods, and the user scheduler when I have multiple hubs running. Specifically, some user pods get assigned to nodes with insufficient memory. They seem to be assigned at the same time as the placeholder pods, and then they get stuck: they are never cancelled and they never finish starting.

What is happening here? Why don't the user scheduler and the autoscaler work? Based on this post, it seems like it should work, since cluster resources have the namespace appended. I also can't see why the placeholders should be a problem, since pod priority is cluster-wide.

Can you give any sense of what's going on? I'd be happy to try to fix the problem if I understood it. This is standing in the way of an ideal deployment for us, and I'd love to figure out a way past it.

Thanks!

I afraid I don’t have any experience of using the autoscheduler, all my clusters are a fixed size.

Are you able to share your configs? That might help us figure out your problem. For example, does each deployment have its own scheduler and placeholders? If so, I can imagine they might conflict.

@manics Thanks for the reply! I eventually came to the same conclusion: it was multiple autoschedulers that weren't communicating. I'm still learning a lot about how scheduling works in Kubernetes, and this was a new realization for me. I think things will work if I manually create a single autoscheduler with the same policy as the one in the JupyterHub helm chart, and then manually set c.KubeSpawner.scheduler_name to the name of that scheduler.
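
Roughly what I have in mind for each hub's release, as an untested sketch. The scheduler name "shared-user-scheduler" is made up, and I'd deploy that scheduler itself separately:

```hcl
# Untested sketch: each hub's release disables the chart's own user scheduler
# (and placeholders) and points KubeSpawner at a single, separately deployed
# scheduler. The name "shared-user-scheduler" is hypothetical.
resource "helm_release" "hub_a" {
  name       = "hub-a"
  namespace  = "hub-a"
  repository = "https://jupyterhub.github.io/helm-chart/"
  chart      = "jupyterhub"

  values = [<<-EOT
    scheduling:
      userScheduler:
        enabled: false      # don't run a scheduler per release
      userPlaceholder:
        enabled: false      # I'll manage placeholder pods myself
    hub:
      extraConfig:
        sharedScheduler: |
          c.KubeSpawner.scheduler_name = "shared-user-scheduler"
  EOT
  ]
}
```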

I think I will likely also have to manually create the stateful set of dummy pods to get them to use that scheduler as well (since the helm chart hard-codes the name of the autoscheduler), but that will actually give me a lot more flexibility in how I create and use the dummy pods.
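
For the dummy pods, I'm picturing something modelled loosely on the chart's user-placeholder: a StatefulSet of pause containers with a low-priority class, pinned to the shared scheduler. Again untested, and all the names, namespace, and resource requests below are made up:

```hcl
# Untested sketch: placeholder pods managed outside the chart, scheduled by
# the shared scheduler so they reserve headroom for real user pods.
resource "kubernetes_manifest" "user_placeholder" {
  manifest = yamldecode(<<-EOT
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: shared-user-placeholder
      namespace: hub-a
    spec:
      replicas: 2
      serviceName: shared-user-placeholder
      selector:
        matchLabels:
          app: shared-user-placeholder
      template:
        metadata:
          labels:
            app: shared-user-placeholder
        spec:
          schedulerName: shared-user-scheduler
          # hypothetical PriorityClass; it must have lower priority than real
          # user pods so the placeholders get preempted when users arrive
          priorityClassName: shared-user-placeholder-priority
          terminationGracePeriodSeconds: 0
          containers:
            - name: pause
              image: registry.k8s.io/pause:3.9
              resources:
                requests:
                  cpu: "1"
                  memory: 2Gi
  EOT
  )
}
```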

Thanks again for the reply!

Good luck! This is way outside my area of expertise :grinning:.
Would you mind providing an update, if it works, with details of what you did? I’m sure others on this forum would be interested.

@albertmichaelj Could you please let me know if you eventually solved this issue? Can you now run multiple hubs on a single cluster?

I do run multiple hubs on a single cluster. This required me to pull the scheduler out of the standard jupyterhub chart and create a separate chart for it. I also had to set up my own dummy pods. It was a significant amount of work, and it's highly customized to what I'm specifically doing, unfortunately.

I’d love to try to contribute ways to make the jupyterhub k8s repo more easily useable for multiple hubs in a single cluster, but I have not yet had the time to do so. Sorry I can’t be of more help!