Resource requirements of the JupyterHub proxy

Hi all,

We are deploying JupyterHub using the zero-to-jupyterhub Helm chart. The default resource requests for the proxy are 0.2 CPU and 512 MB of memory, and we also set those same values as limits, which was probably a mistake. With ~80 concurrent users (JupyterLab, RStudio) we saw the proxy's CPU usage approach the 0.2 CPU limit. After some time, the proxy died (presumably it was throttled by Kubernetes and could therefore no longer serve requests), which resulted in interruptions for the users. Memory consumption stayed well below the 512 MB limit the whole time.
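For reference, the relevant part of our Helm values looks roughly like this (structure as in the chart's proxy.chp.resources settings; the numbers are the ones described above, and key names may vary slightly between chart versions):

```yaml
# Excerpt from our config.yaml for the zero-to-jupyterhub chart.
# The requests match the chart defaults we saw; mirroring them as limits
# is the part that probably got us into trouble.
proxy:
  chp:
    resources:
      requests:
        cpu: 200m        # 0.2 CPU
        memory: 512Mi
      limits:
        cpu: 200m        # hard cap, so Kubernetes throttles the proxy here
        memory: 512Mi
```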

Now I’m wondering whether we simply have to give the proxy more resources, or whether the fact that <100 concurrent sessions create this much load on the proxy is an indication of another problem.
Specifically, how many concurrent user sessions would you expect the proxy to be able to handle with 0.2 CPU?
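If the answer is simply that 0.2 CPU is not enough, our plan would be to raise the request and drop the CPU limit, roughly like this (the numbers below are guesses on my part, not values from the docs):

```yaml
# Possible adjustment: keep a request so the scheduler reserves capacity,
# but set no CPU limit, so the proxy can burst instead of being throttled.
proxy:
  chp:
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        memory: 512Mi    # memory limit only, no CPU limit
```

(From what I understand, the CPU request mainly affects scheduling, while only the limit triggers throttling, so dropping the limit seemed like the safer change.)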

I should also mention that the proxy reported some socket hang-ups, which are most likely related to a misconfiguration of the NGINX instance running in front of JupyterHub. These socket hang-ups don’t seem to directly affect the user experience, but I wonder whether they might create an artificially high load on the proxy.

Thanks for any pointers in this matter!