How to limit memory in a standalone JupyterLab?

Hi all,

Is there a way to enforce memory limits on a standalone JupyterLab (i.e., one that was started directly from the shell via jupyter lab ...)?

Background of my question: I’m running JupyterLab inside an HPC job on a multi-tenant node. The batch scheduler will kill my job if it consumes more memory than was requested, so I want to make sure JupyterLab (and the kernels) don’t allocate more memory than they are allowed to. Another use case would be users starting JupyterLab directly on a shared computer that they SSH into.

I know that if JupyterLab was started from a Hub, I can set mem_limit on the spawner. But here, I don’t have a spawner.

Well, JupyterLab is more the UI than what happens in the background. I am not sure whether we can pass arguments to the Python kernel, but have you thought about using something like ?

There is an open Jupyter Enhancement Proposal about parameterized kernels: In the interim, I believe the only way to constrain resources is to launch a kernel in a container.
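For the container route, a minimal sketch could look like the following Compose file. The image name (jupyter/base-notebook), the port, and the 2g figure are assumptions for illustration, not anything prescribed in this thread — adjust them to your setup:

```yaml
# Hypothetical sketch: run JupyterLab in a container with a hard memory cap.
services:
  jupyter:
    image: jupyter/base-notebook:latest
    ports:
      - "8888:8888"
    mem_limit: 2g       # hard cap enforced by the container runtime
    memswap_limit: 2g   # equal to mem_limit => no extra swap allowed
```

With memswap_limit set equal to mem_limit, the kernel processes get killed by the container runtime instead of taking down the whole HPC job.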


(or make a custom kernel spec that passes the arguments)

For example,

$ cat ~/.jupyter/kernels/mypython/kernel.json
{
  "argv": [
    "python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}",
    <other args here>
  ],
  "display_name": "Python 3 Constrained",
  "language": "python"
}

@willirath you could “cage” JupyterLab and its kernels by making use of Linux systemd.

systemd-run creates and starts a detached execution environment on the fly:

sudo* systemd-run -t -p MemoryLimit=500M jupyter lab --ip= --port=8000 --notebook-dir=/whatever

*AFAIK you need root privileges to execute the systemd-run command (on distributions using cgroup v2, systemd-run --user may work without root).

In addition, cgroups could be used to limit memory usage on a per-user/per-group basis, e.g.:

# cat /etc/cgconfig.conf
group DEV {
    memory {
        memory.limit_in_bytes = <limit>;
    }
}
group DS {
    memory {
        memory.limit_in_bytes = <limit>;
    }
}

# cat /etc/cgrules.conf
user      memory   DS/
@group      memory   DEV/

I hope it helps


Try maybe? I read through the linked blog post and it seems to do exactly what you want.