Scheduler is not using all nodes... pods bunching onto a few common nodes

We have a 15-node Kubernetes cluster, but we keep running out of resources on individual nodes because the scheduler is not spreading pods across all available nodes; 4-5 nodes are heavily used while the rest sit nearly unoccupied. We do not have any taints set. Any guidance or feedback is appreciated.
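For reference, this is roughly how the imbalance can be observed from the command line (a sketch; `kubectl top` requires the metrics-server addon, and the `jhub` namespace is an assumption, substitute your own):

```bash
# Per-node CPU/memory usage; the 4-5 busy nodes should stand out
kubectl top nodes

# Which node each user pod landed on (namespace is an assumption)
kubectl get pods -n jhub -o wide

# Resource requests already allocated on a given node
kubectl describe node <node-name> | grep -A 10 "Allocated resources"
```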
Mike - mike.alder@gm.com

It sounds like the user scheduler is enabled. The Z2JH user scheduler deliberately packs user pods onto the fewest possible nodes so that a cluster autoscaler can scale the empty ones down, which produces exactly the bunching you describe. See the Optimizations page of the Zero to JupyterHub with Kubernetes documentation for the rationale behind it, and how to disable it.
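If that turns out to be the cause, disabling it is a single chart value; a minimal sketch, assuming the standard Z2JH Helm chart with a release named `jhub` in namespace `jhub` (substitute your own names):

```bash
# Turn off the user scheduler so user pods fall back to the default
# kube-scheduler, which spreads pods across nodes
helm upgrade jhub jupyterhub/jupyterhub -n jhub --reuse-values \
  --set scheduling.userScheduler.enabled=false
```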

Thank you for your response. We believe the user scheduler is disabled. Do you know what command we can run on the cluster side to check whether there is a configuration that would limit node availability? Any other thoughts on why we would see this behavior? Thank you.
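For reference, two quick cluster-side checks (the release and namespace names are assumptions):

```bash
# Inspect the values the chart was actually deployed with;
# look for scheduling.userScheduler.enabled
helm get values jhub -n jhub

# The user scheduler runs as its own deployment when enabled,
# so its absence here also confirms it is off
kubectl get deployments -n jhub | grep user-scheduler
```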

You might inspect a user pod to check if it has any pod/node affinity set.
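A sketch of that check, assuming a user pod named `jupyter-<username>` in namespace `jhub`:

```bash
# Look for nodeAffinity/podAffinity rules in the pod spec
kubectl get pod jupyter-<username> -n jhub -o yaml | grep -B 2 -A 20 affinity

# Also check which scheduler the pod was assigned to; anything other
# than "default-scheduler" means a custom scheduler placed it
kubectl get pod jupyter-<username> -n jhub -o jsonpath='{.spec.schedulerName}'
```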