Spawn multiple hubs in the same Kubernetes cluster using Helm

I’m using Terraform to deploy JupyterHub on GKE. I’m following the Zero to JupyterHub on Kubernetes guide, so I’m using Helm to deploy it, but configured through Terraform. This works great when I’m deploying a single hub in a single cluster. Now, though, I want to deploy multiple hubs in the same cluster, and I believe I can separate them using namespaces.

Is this as easy as creating another helm_release but using a different namespace?
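For illustration, a second `helm_release` in its own namespace might look like the sketch below. The resource name, release name, namespace, and values file are all hypothetical, not taken from anyone’s actual deployment:

```hcl
# A second z2jh release, isolated in its own namespace.
# All names here are illustrative placeholders.
resource "helm_release" "hub_team_b" {
  name             = "jupyterhub-team-b"
  repository       = "https://hub.jupyter.org/helm-chart/"
  chart            = "jupyterhub"
  namespace        = "jhub-team-b"
  create_namespace = true

  # Per-hub overrides live in their own values file.
  values = [file("values-team-b.yaml")]
}
```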


Yes, it should be that easy.

One thing to be aware of, though, is cluster-wide z2jh settings such as autoscaling or the user scheduler: you may need to disable those in all but one of the releases.
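As a sketch, the z2jh values below disable the per-release scheduling extras (user scheduler, pod priority, placeholder pods) so they don’t conflict across hubs. This is an assumption about what a multi-hub setup might need, not a tested recipe:

```yaml
# Illustrative z2jh values: turn off per-release scheduling features
# so multiple hubs in one cluster don't each run their own copy.
scheduling:
  userScheduler:
    enabled: false
  podPriority:
    enabled: false
  userPlaceholder:
    enabled: false
```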


This sounds like a great place for a JupyterHub module in Terraform! I’m planning on working on that in the next month or two, then testing this question should be pretty easy. Have you had success @aaronstrong?

Sorry to bump this thread, but I am actually looking to do the same thing where I deploy multiple hubs in a single Kubernetes cluster (using EKS on AWS).

@manics, I have run into issues trying to use autoscaling, placeholder pods, and the user scheduler when I have multiple hubs running. Specifically, some pods get assigned to nodes with insufficient memory, seemingly at the same time as the placeholder pods, and then get stuck: they never cancel and they never complete.

What is happening here? Why don’t the user scheduler and the autoscaler work? Based on this post, it seems like it should work since cluster resources have the namespace appended. I also can’t see why the placeholders should be a problem since pod priority is cluster wide.

Can you give any sense for what’s going on? I’d be happy to try to fix the problem if I understood it. This is standing in the way of an ideal deployment for us, and I’d love to figure out a way past it.


I’m afraid I don’t have any experience with the autoscaler; all my clusters are a fixed size.

Are you able to share your configs? That might help us figure out your problem. E.g. does each deployment have its own scheduler and placeholders? If so, I can imagine they might conflict.

@manics Thanks for the reply! I eventually came to the same conclusion that it was multiple autoschedulers that weren’t communicating. I’m still learning a lot about how scheduling works in Kubernetes, and this was a new realization to me. I think things will work if I just manually create a single autoscheduler with the same policy in the JupyterHub helm chart, and then manually set c.KubeSpawner.scheduler_name to the name of the manually created scheduler.
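If it helps anyone following along, pointing KubeSpawner at a single shared scheduler might look something like this in the Helm values. The scheduler name `shared-user-scheduler` is made up for this sketch; only the `c.KubeSpawner.scheduler_name` setting itself comes from KubeSpawner:

```yaml
# Illustrative: direct spawned user pods at one manually managed scheduler.
# "shared-user-scheduler" is a hypothetical name.
hub:
  extraConfig:
    sharedScheduler: |
      c.KubeSpawner.scheduler_name = "shared-user-scheduler"
```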

I think I will likely have to also manually create the stateful set of dummy pods in order to get them to use that scheduler as well (since the helm chart codes in the name of the autoscheduler), but that will actually give me a lot more flexibility in how I create and use the dummy pods.
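A manually created placeholder StatefulSet might be sketched as below. Everything here is an assumption: the names, the replica count, the priority class, and the memory request would all need to be tuned to match real user pods:

```yaml
# Illustrative placeholder StatefulSet; all names, sizes, and the
# priority class are hypothetical, not from this thread.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: shared-user-placeholder
spec:
  serviceName: shared-user-placeholder
  replicas: 2
  selector:
    matchLabels:
      app: shared-user-placeholder
  template:
    metadata:
      labels:
        app: shared-user-placeholder
    spec:
      # Use the same manually created scheduler as the user pods.
      schedulerName: shared-user-scheduler
      # Must be lower priority than real user pods so they can evict these.
      priorityClassName: user-placeholder-priority
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              memory: 4Gi  # size roughly like a typical user pod
```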

Thanks again for the reply!


Good luck! This is way outside my area of expertise :grinning:.
Would you mind providing an update, if it works, with details of what you did? I’m sure others on this forum would be interested.