Spawn multiple hubs in the same Kubernetes cluster using Helm

Sorry to bump this thread, but I'm looking to do the same thing: deploy multiple hubs in a single Kubernetes cluster (EKS on AWS).
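
For context, each hub is its own Helm release in its own namespace, installed roughly like this (the release/namespace names `hub-a` and `hub-b` are just placeholders for my actual names):

```sh
# Add the JupyterHub chart repository
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update

# One release per hub, each in its own namespace
helm upgrade --install hub-a jupyterhub/jupyterhub \
  --namespace hub-a --create-namespace \
  --values hub-a-values.yaml

helm upgrade --install hub-b jupyterhub/jupyterhub \
  --namespace hub-b --create-namespace \
  --values hub-b-values.yaml
```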

@manics, I have run into issues using autoscaling, placeholder pods, and the user scheduler when multiple hubs are running. Specifically, some user pods get scheduled onto nodes with insufficient memory and get stuck there. They seem to be scheduled at the same time as the placeholder pods, and once that happens they never recover: the spawns aren't cancelled and the pods never finish starting.
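
In case it matters, here is roughly what the scheduling section of each hub's values.yaml looks like (replica counts and memory numbers are just illustrative):

```yaml
scheduling:
  userScheduler:
    enabled: true      # each hub runs its own user-scheduler
  podPriority:
    enabled: true      # creates the priority classes the placeholders rely on
  userPlaceholder:
    enabled: true
    replicas: 2        # example value

singleuser:
  memory:
    guarantee: 1G      # example memory request for each user pod
    limit: 2G
```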

What is happening here? Why don't the user scheduler and the autoscaler work in this setup? Based on this post, it seems like it should work, since the chart's cluster-scoped resources have the namespace appended to their names. I also can't see why the placeholders would be a problem, since pod priority is cluster-wide.
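
In case it's useful, this is how I've been poking at it so far (namespace names are placeholders again):

```sh
# Priority classes are cluster-scoped, so both releases' classes should show up here
kubectl get priorityclass

# Each hub has its own user-scheduler and placeholder pods in its own namespace
kubectl get pods -n hub-a -l component=user-scheduler
kubectl get pods -n hub-a -l component=user-placeholder

# The stuck user pods show the scheduling decision in their events
kubectl describe pod -n hub-a -l component=singleuser-server
```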

Can you give me any sense of what's going on? I'd be happy to try to fix the problem if I understood it. This is standing in the way of an ideal deployment for us, and I'd love to figure out a way past it.

Thanks!