Trying to get the continuous image puller, hook image puller and user placeholder to run in two different (GPU and non-GPU) node pools

We want to run CUDA- and TensorFlow-enabled single-user pods in a dedicated GPU node pool, and run all other single-user pods in a non-GPU node pool.

We created a profileList with a kubespawner_override for each single-user Docker image. Each kubespawner_override contains a Kubernetes toleration that matches a specific node pool’s taint.
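For reference, a sketch of what such a profileList might look like in the Helm chart values. The image names, taint key `gpu`, and value `present` are placeholders, not our actual configuration; note that a toleration only *permits* scheduling onto the tainted pool, so a `node_selector` override (or node affinity) is also needed to *require* it:

```yaml
singleuser:
  profileList:
    - display_name: "GPU (CUDA + TensorFlow)"
      kubespawner_override:
        image: example/gpu-notebook:latest   # placeholder image
        node_selector:
          pool: gpu                          # placeholder node-pool label
        tolerations:
          - key: gpu                         # must match the GPU pool's taint key
            operator: Equal
            value: present
            effect: NoSchedule
    - display_name: "Standard (non-GPU)"
      kubespawner_override:
        image: example/cpu-notebook:latest   # placeholder image
        node_selector:
          pool: cpu                          # placeholder node-pool label
```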

Despite this, we are not able to get the single-user pods scheduled into their respective node pools.

We believe the root cause is that the continuous image puller and hook image puller are not being scheduled in their respective (GPU and non-GPU) node pools, and we are not sure how to get them scheduled into both. We would like a continuous image puller and a hook image puller on each node in the non-GPU node pool, and a continuous image puller and a hook image puller on each node in the GPU node pool.
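One possible angle, hedged since it depends on the chart version: the image-puller DaemonSets in the Zero to JupyterHub chart only tolerate the chart's own default taints, so nodes carrying custom taints never receive puller pods. In chart versions where the pullers reuse the user pods' tolerations, adding both pools' taints under `singleuser.extraTolerations` may be enough (taint keys below are placeholders):

```yaml
singleuser:
  extraTolerations:
    # Placeholder taint keys; replace with the taints actually set on each pool.
    - key: gpu
      operator: Exists
      effect: NoSchedule
    - key: non-gpu
      operator: Exists
      effect: NoSchedule
prePuller:
  hook:
    enabled: true        # pull images before upgrades complete
  continuous:
    enabled: true        # keep pulling onto nodes as they join
```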

Finally, we would like 2 user placeholders for each node in the GPU Node Pool and 2 user placeholders in the non-GPU Node Pool. We are not able to make that happen either.
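As far as I know, the chart runs a single user-placeholder StatefulSet, so there is no built-in way to pin a number of placeholders per pool; the closest knob is the total replica count, with placement then governed by the same affinities and tolerations as user pods. Splitting placeholders across two pools would likely require a second, manually managed placeholder deployment. A minimal sketch of the built-in setting:

```yaml
scheduling:
  userPlaceholder:
    enabled: true
    replicas: 4   # total placeholders cluster-wide, not per node pool
```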

Any guidance on how to get the continuous image puller, hook image puller, and user placeholders scheduled onto the nodes in both the GPU and non-GPU node pools would be a great help.

If you haven’t already solved this, you can set taints and tolerations for your GPU pods, which will allow the continuous image puller, hook image puller, and user placeholders to be provisioned on those nodes.

An example node affinity preference would be:

preferredDuringSchedulingIgnoredDuringExecution:
- preference:
    matchExpressions:
    - key: key1
      operator: In
      values:
      - value1
    - key: gpu
      operator: Exists
  weight: 100

and set the corresponding tolerations as well.
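For instance, assuming the GPU node pool is tainted with key `gpu` and effect `NoSchedule` (a placeholder matching the affinity example above), the matching toleration would be:

```yaml
tolerations:
- key: gpu           # matches the GPU node pool's taint key
  operator: Exists   # tolerate any value of this taint
  effect: NoSchedule
```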