Limit namespace memory (actual) usage


How can I limit the total resources of the jupyterhub namespace without ending up with reserved-but-unused resources?

The instructions described here are very clear, but they do not seem to be compatible with limiting the resources of the whole JupyterHub platform on Kubernetes using a ResourceQuota on the namespace.

Suppose we have a cluster with 200GB of memory, and let's say we want to dedicate 100GB to JupyterHub while limiting the maximum memory usage of every user to 12GB (users use about 4GB of memory on average).
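With the Zero to JupyterHub Helm chart, that per-user limit would be configured roughly like this (a sketch; the guarantee value here is an assumption matching the average usage above):

```yaml
# Zero to JupyterHub Helm chart values (sketch)
singleuser:
  memory:
    limit: 12G       # hard cap per single-user pod
    guarantee: 4G    # scheduler reservation, matching the ~4GB average
```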

If we create a ResourceQuota with limits.memory = 100GB for that namespace, Kubernetes will not spawn a new single-user pod if the sum of the resources.limits.memory values of all containers already in the namespace plus the resources.limits.memory value of the new pod exceeds 100GB.
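Such a quota would look roughly like this (the namespace and object names are assumptions):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: jupyterhub-memory       # assumed name
  namespace: jupyterhub         # assumed namespace
spec:
  hard:
    # the sum of all containers' memory *limits* in the namespace
    # may not exceed this value
    limits.memory: 100Gi
```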

The problem is that the calculation Kubernetes uses to decide whether to spawn a pod depends on the maximum memory limit of the pods, NOT on their actual memory usage. With a 12GB limit per user, a 100GB quota admits only 8 pods (8 × 12GB = 96GB), even though at ~4GB average usage roughly 25 users would fit in 100GB. So it seems impossible to limit both the per-user maximum memory usage AND the memory usage of the whole jupyterhub namespace without having “reserved but not used” RAM.

Any ideas or workaround for this?

Take into account that, for a ResourceQuota with a memory limit to work, every container deployed in that namespace must set a resources.limits.memory value.
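One way to satisfy that requirement without editing every deployment is a LimitRange that applies a default memory limit to containers that don't set one. A sketch (names and values are assumptions):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-memory-limit    # assumed name
  namespace: jupyterhub         # assumed namespace
spec:
  limits:
    - type: Container
      default:
        memory: 1Gi             # limit applied when a container sets none
      defaultRequest:
        memory: 512Mi           # request applied when a container sets none
```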


This would need to be implemented on the k8s side, but I can't think of a way: what happens if you hit the namespace memory limit while none of the pods are at their individual limit, and a pod requests one more byte? Do you kill the requesting pod even though another pod may be using far more memory? Do you kill the pod with the current highest usage? A random pod?

To be honest, I don't know yet whether a specific scheduling-priority algorithm is needed; just limiting the resources of the whole jupyterhub namespace would be enough.