I am wondering if there is an established way to prevent a JupyterHub server from shutting down after a period of inactivity.
We have disabled culling, but the server still shuts off automatically after a period of client disconnection (e.g. closing the browser), even when terminal processes are still running in the background. This means we cannot run long-running scripts unless we keep a browser connection open. Is there a way to let the server stay on indefinitely to run background terminal processes, even without an active client/browser connection?
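For reference, culling is disabled on our side roughly like this (a minimal sketch, assuming a Zero to JupyterHub-style values.yaml; our actual configuration differs in the details):

```yaml
# Illustrative Zero to JupyterHub values.yaml fragment (not our exact file):
# switch off the hub-managed idle culler so the hub never stops user servers.
cull:
  enabled: false
```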
Thank you for your reply and for the topic suggestion – I will take a look at it.
It seems to be the entire singleuser server that shuts itself off (which also terminates all terminal windows and processes at the same time). It happens after a period of time – I can exit the browser and return to find it still active, but after maybe 30 min to an hour the entire server shuts off and has to restart, losing all open terminals/processes/notebooks in the process.
In terms of the use case, I’m trying to run some terminal processes in the background after exiting the browser (I’m not trying to keep any notebooks running, though).
If the linked thread doesn’t help, could you please give us details of your JupyterHub deployment: how you installed it, which versions of the components are installed, and your configuration files with secrets redacted.
I know this topic is quite old, and sorry for replying here, but even though I have the cull config at the top level, the user pods are still being culled, and I’m not sure what I’m missing.
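For reference, my cull section sits at the top level of the Helm values.yaml, roughly like this (the values shown here are illustrative, not my exact ones):

```yaml
# Top-level cull section of the values.yaml (illustrative values).
cull:
  enabled: false   # hub-managed idle culler disabled
  # If culling were enabled, these settings would control it:
  # timeout: 3600  # seconds of inactivity before a server is culled
  # every: 600     # how often (seconds) the culler checks
```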
Check your JupyterHub logs to see if the hub is culling the user servers. If it is, please turn on debug logging and share your hub logs and your full configuration.
If it’s not, then it might be your K8s cluster that’s terminating the pods, e.g. due to lack of resources, node replacement, etc.
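If it does turn out to be the hub culler, a common way to get more detail in a Zero to JupyterHub deployment is to enable debug logging in your values.yaml (a minimal sketch, assuming a reasonably recent chart version):

```yaml
# values.yaml fragment: turn on verbose (debug) logging for the hub.
debug:
  enabled: true
```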
So, configuration-wise, I have the cull config in the correct place, which is what I wanted to confirm.
Now I will go through the logs, autoscaling, etc. For the time being only 2 user pods were running, so resources shouldn't be the problem, but it makes sense to check everything until I understand why the node scaled down over the weekend.
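If the autoscaler does turn out to be removing the node, my understanding is that user pods can be annotated as not safe to evict so a scale-down doesn't take them with it. Something like this, assuming the chart version supports singleuser.extraAnnotations (an assumption on my part):

```yaml
# Illustrative values.yaml fragment: annotate user pods so the cluster
# autoscaler treats them as not safe to evict during node scale-down.
singleuser:
  extraAnnotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```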