Running Z2JH on EKS in AWS with EFS-backed storage. I’m not sure which of these details, if any, is relevant, but I’m happy to provide anything else that might be helpful here; I’m just a little unsure where to start.
Basically - it seems like cell execution has a delay before actually starting to run the code, and I can’t figure out why. See the gif. (Note - there’s no action for ~10 seconds; it may look like the gif freezes, but that’s just the nature of what we’re talking about here.)

You can see that when I run the cell, the result doesn’t appear for ~10 seconds, and yet the run time displayed is only 3ms. So it’s not that it ran but just took a while (extra load on the node or something); it just ‘held’ the job for a while?
The timings show how long the code executed on the kernel by default. Based on that, the delay might be at the server or networking level.
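To illustrate the distinction, here’s a minimal sketch (assuming jupyter_client is installed and a python3 kernel is registered; none of this is from the thread itself) that compares the duration stamped by the kernel’s own status messages against the full round trip the client waits through - queueing or network delay inflates the second number but not the first:

import time
from jupyter_client.manager import start_new_kernel

# Start a throwaway kernel; start_new_kernel returns a manager plus a
# blocking client with its channels already connected.
km, kc = start_new_kernel(kernel_name="python3")
try:
    t0 = time.monotonic()
    msg_id = kc.execute("1 + 1")
    busy = idle = None
    while idle is None:
        msg = kc.get_iopub_msg(timeout=30)
        if msg["parent_header"].get("msg_id") != msg_id:
            continue  # chatter from other requests
        if msg["msg_type"] == "status":
            state = msg["content"]["execution_state"]
            if state == "busy":
                busy = msg["header"]["date"]   # timestamp stamped by the kernel
            elif state == "idle":
                idle = msg["header"]["date"]
    round_trip = time.monotonic() - t0
    print(f"kernel-side busy->idle: {(idle - busy).total_seconds() * 1000:.1f} ms")
    print(f"client round trip:      {round_trip * 1000:.1f} ms")
finally:
    kc.stop_channels()
    km.shutdown_kernel()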
Thanks @krassowski - is there anything you can recommend to help me investigate which it is, and in turn what can be done about it?
Running jupyter-server with --debug and observing the calls might help reveal something.
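For a standalone server that’s just the flag on the command line (passing it through Z2JH is shown further down):

jupyter server --debug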
Not seeing a whole lot that points to anything obvious to me. It’s hard to catch perfectly, with the rest of the logs being spammed and me toggling between the logs and the execution, but I think I caught one execution where nothing was printed between the moment I hit execute and the moment the execution actually happened (a “short” delay of ~2-4s this time).
There are lines that show busy/idle, but those don’t seem to be where the delay is (again, the delay is before these lines, where nothing was printed).
(I don’t think the kernel UUID is sensitive, but I changed it to <> just to be safe since I don’t know)
[D 2025-02-03 19:07:44.973 ServerApp] Checking user pastrami with scopes ['access:servers!server=pastrami/', 'read:users:groups!user=pastrami', 'read:users:name!user=pastrami'] against {'access:servers!server=pastrami/', 'access:servers!user=pastrami'}
[D 2025-02-03 19:07:44.973 ServerApp] Allowing user pastrami with scopes {'access:servers!server=pastrami/'}
[D 2025-02-03 19:07:49.774 ServerApp] activity on <>: execute_result
[D 2025-02-03 19:07:49.775 ServerApp] activity on <>: status (idle)
[D 2025-02-03 19:07:49.778 ServerApp] activity on <>: status (busy)
[D 2025-02-03 19:07:49.778 ServerApp] activity on <>: execute_input
[D 2025-02-03 19:07:49.780 ServerApp] activity on <>: status (idle)
[D 2025-02-03 19:07:49.897 ServerApp] activity on <>: status (busy)
[D 2025-02-03 19:07:49.928 ServerApp] activity on <>: execute_input
[D 2025-02-03 19:07:50.382 ServerApp] Checking user pastrami with scopes ['access:servers!server=pastrami/', 'read:users:groups!user=pastrami', 'read:users:name!user=pastrami'] against {'access:servers!server=pastrami/', 'access:servers!user=pastrami'}
[D 2025-02-03 19:07:50.382 ServerApp] Allowing user pastrami with scopes {'access:servers!server=pastrami/'}
Maybe there’s something else causing the delay that just isn’t showing up in the logs? FWIW, in case it matters - I set --debug through the Z2JH YAML settings:
...
singleuser:
  extraEnv:
    NOTEBOOK_ARGS: "--allow-root --debug"
...
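Since EFS is in the mix (see the top of the thread), one cheap check to rule filesystem latency in or out is timing small-file I/O on the home directory. A minimal sketch - the path and iteration count are illustrative assumptions, not something from this thread:

import os
import tempfile
import time

def mean_small_io_ms(directory: str, iterations: int = 20) -> float:
    # Time write + fsync + read of a tiny file; NFS/EFS round trips show up here.
    start = time.monotonic()
    for _ in range(iterations):
        with tempfile.NamedTemporaryFile(dir=directory) as f:
            f.write(b"x" * 1024)
            f.flush()
            os.fsync(f.fileno())
            f.seek(0)
            f.read()
    return (time.monotonic() - start) / iterations * 1000

print(f"mean small-file I/O: {mean_small_io_ms(os.path.expanduser('~')):.1f} ms")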