Multiple hubs per Kubernetes cluster?

Hi all,

I’m looking to support JupyterHub (JH) for class use at our institution. After a large ‘needs assessment’ it looks like we’ll have a wide range of classes of various sizes, each with different needs for shared storage, configurability, and environments. I’m really hoping to use multiple hubs to do this, multi-tenant style, since I’m having good success with Rancher and moderate success with ReadWriteMany shared storage (learning k8s and helm is a blast! but frustrating at times :stuck_out_tongue: )

However, I’m still learning when it comes to the user-scheduling and auto-scaling parts of the JH helm chart. I wonder if anyone foresees any issues with multiple hubs per cluster?

Things I’ve noticed in testing, but am not yet sure how they’ll affect things later:

  • some of the roles are ClusterRoles; I wonder if these could be scoped to namespaces, or maybe replaced by a single role shared by all hubs?

  • many of the resource names aren’t unique across hubs; maybe that’s only an issue for within-namespace duplicates, which we can avoid (or the names could be changed easily enough?). See the kubectl sketch after this list for how I’ve been checking what each release creates.

  • the biggest concern: user scheduling and cluster autoscaling. If there are multiple copies, I wonder whether they’ll be aware of each other, or whether we’ll end up with nodes that don’t drain properly for down-scaling.
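
For reference, this is roughly how I’ve been poking around to see which resources a release creates cluster-wide versus inside its namespace. Just a sketch: the namespace “class-a” is a placeholder from my test setup.

```bash
# Cluster-scoped resources (the ClusterRoles I mentioned above); I just
# eyeball the list for anything the chart created.
kubectl get clusterrole,clusterrolebinding

# Namespace-scoped resources, where name collisions would matter if two
# hubs ever shared a namespace; "class-a" is a placeholder namespace.
kubectl get deploy,svc,role,rolebinding,serviceaccount,pdb -n class-a
```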

Thanks for any insights!
~Shawn

I think I found some of the answers I was looking for :slight_smile:

Regarding running multiple hubs per cluster: this is supported, as described in this git commit, which appends the release name to the names of cluster-scoped resources. However, namespace-scoped resources (which are most of them) don’t include the release name, so multiple hubs can’t run in the same namespace; each hub needs its own.
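
So in practice each class would get its own namespace and its own helm release. A minimal sketch of what I have in mind (the release names, namespaces, and values files below are all made up for illustration):

```bash
# Add the JupyterHub chart repo (once per workstation).
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update

# One namespace + one release per class, each with its own values file.
kubectl create namespace class-a
helm upgrade --install hub-class-a jupyterhub/jupyterhub \
  --namespace class-a --values class-a-values.yaml

kubectl create namespace class-b
helm upgrade --install hub-class-b jupyterhub/jupyterhub \
  --namespace class-b --values class-b-values.yaml
```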

Regarding autoscaling, multiple hubs should also play nicely together, since the user-scheduler packs user servers tightly onto nodes by considering overall node resource usage, independent of what is causing that usage (including pods owned by other hubs and other k8s workloads). I believe this is what the ClusterRoleBinding is used for: providing access to the cluster-wide kube-scheduler that handles this. I’m not sure yet whether this is possible with reduced permissions in a more multi-tenant fashion.
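
If you want to sanity-check this on a live cluster, something like the following should show each release running its own user-scheduler pods while holding cluster-wide permissions via its ClusterRoleBinding. Again only a sketch: the namespaces are the placeholders from the example above, and I’m assuming the chart’s usual component label on the user-scheduler pods.

```bash
# Each release runs its own user-scheduler deployment in its own namespace
# (assuming the chart labels it with component=user-scheduler).
kubectl get deploy,pod -n class-a -l component=user-scheduler
kubectl get deploy,pod -n class-b -l component=user-scheduler

# The cluster-wide permissions come from a ClusterRoleBinding, whose name
# (per the commit above) has the release name appended.
kubectl get clusterrolebinding | grep -i user-scheduler
```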
