Is there a way to enforce memory limits on a standalone JupyterLab, i.e. one that was started directly from the shell via
jupyter lab ...?
Background of my question: I’m running a JupyterLab inside an HPC job on a multi-tenant node. The batch scheduler will kill my job if it consumes more memory than was requested, so I want to make sure JupyterLab (and its kernels) don’t allocate more memory than they are allowed to. Another use case would be users starting their JupyterLab directly on a shared computer that they SSH into.
I know that if JupyterLab was started from a Hub, I can set
mem_limit on the spawner. But here, I don’t have a spawner.
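For reference, the Hub-based approach I mean is roughly this (a sketch of a jupyterhub_config.py; the 2G value is just an example):

```python
# jupyterhub_config.py -- only applies when JupyterLab is spawned via JupyterHub.
# `c` is the configuration object that JupyterHub injects when loading this file.
# Note: with the default LocalProcessSpawner this limit is advisory; actual
# enforcement depends on the spawner in use (e.g. SystemdSpawner, DockerSpawner).
c.Spawner.mem_limit = '2G'
```

That setting has no equivalent I can find for a bare `jupyter lab` process, which is what my question is about.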