JupyterHub server shuts down after a period of inactivity

I am wondering if there is an established way to prevent a Jupyterhub server from shutting down after a period of inactivity.

We have disabled culling, but the server still shuts down automatically after a period of client disconnection (e.g. closing the browser), even when terminal processes are still running in the background. This means we cannot run long-running scripts unless we keep a browser connection open. Is there a way to let the server stay up indefinitely to run background terminal processes, even without an active connection from a client/browser?

Can you elaborate on what’s being shut down: is it the whole singleuser server, a notebook/terminal, a kernel, or just a browser connection?

This topic may help with a couple of those:

Thank you for your reply and for the topic suggestion – I will take a look at it.

It seems to be the entire singleuser server that shuts itself down (which also terminates all terminal windows and processes at the same time). It happens after a period of time – I can close the browser and return to find the server still active, but after maybe 30 minutes to an hour the entire server shuts down and has to be restarted, losing all open terminals/processes/notebooks in the process.

In terms of the use case, I’m trying to run some terminal processes in the background after exiting the browser (I’m not trying to keep any notebooks running, though).

If the linked thread doesn’t help, could you please give us details of your JupyterHub deployment: how you installed it, which versions of the components are installed, and your configuration files with secrets redacted.

Hello manics,

JupyterHub runs on a Kubernetes cluster.
This is the YAML used to deploy it:

hub:
  cookieSecret: "xxx"
  db:
    type: sqlite-memory
  extraConfig:
    announcements: |
      c.JupyterHub.template_vars.update({ 'announcement': 'Report issues to tony_cricelli@berkeley.edu', })
  config:
    Authenticator:
      admin_users:
      - xxx
      allowed_users:
      - xxx
    GoogleOAuthenticator:
      client_id: xxx
      client_secret: xxx
      oauth_callback_url: xxx      
      hosted_domain:
        - berkeley.edu
      login_service: your Berkeley Account.
    JupyterHub:
      authenticator_class: google
proxy:
  secretToken: "xxx"
prePuller:
  enabled: true
singleuser:
  cull:
    enabled: false
  extraEnv:
    EDITOR: "vim"
  image:
    name: jupyter/datascience-notebook
    tag: latest
  memory:
    limit: 20G
    guarantee: 2G
  cpu:
    limit: 2.0
    guarantee: 1
  storage:
    type: hostPath
    extraVolumes:
      - name: home
        hostPath:
          path:  /mnt/jhub/2021/h2/homes/{username}
      - name: shared
        hostPath:
          path: /mnt/jhub/2021/h2/shared
    extraVolumeMounts:
      - name: home
        mountPath: /home/jovyan
      - name: shared
        mountPath: /home/jovyan/shared

Thanks, manics – Tony just posted the configuration above.

Your cull: configuration key should be at the top level of the values file, not under singleuser:
https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#cull
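
For reference, a minimal top-level block to disable culling would look something like this (the commented lines show the culler’s other options with illustrative values, only relevant if culling is left enabled):

cull:
  enabled: false
  # timeout: 3600   # seconds of inactivity before a server is culled
  # every: 600      # how often the culling check runs, in seconds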

Thanks manics, I just restarted the hub with

hub:
  cookieSecret: "xx"
  db:
    type: sqlite-memory
  extraConfig:
    cull:
      enabled: false

It needs to be at the very top level, not under anything else.

Thanks again for the help! It is the second line now:

hub:
  cull:
    enabled: false
  cookieSecret: "xx"
  db:
    type: sqlite-memory

No, still one level higher, outside hub!
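
That is, cull sits alongside hub rather than inside it. Applied to your snippet above, the layout would be roughly:

cull:
  enabled: false

hub:
  cookieSecret: "xx"
  db:
    type: sqlite-memory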

Hi,

I know that this topic is quite old and I’m sorry for replying here, but even though I have the cull config at the top level, the pods are still being culled and I’m not sure what I’m missing.

cull:
  enabled: false

singleuser:
  image:
    name: <image>
    tag: latest

I would like to keep the pods up and running indefinitely; is this possible?

Check your JupyterHub logs to see if the hub is culling the servers. If it is, please turn on debug logging and share your hub logs and your full configuration.

If it’s not then it might be your K8s cluster that’s terminating the pods, e.g. due to lack of resources, replacement of the node, etc.
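
For the first check, the chart has a top-level debug flag (assuming a reasonably recent chart version) that raises the hub’s log level:

debug:
  enabled: true

You can then fetch the hub logs with kubectl logs deploy/hub -n <namespace>, and kubectl get events -n <namespace> will usually show whether a pod was evicted or a node was removed.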

Hi and thanks for the suggestions!

So, configuration-wise, I have the cull config in the correct place and wanted to confirm that.

Now I will go through the logs, autoscaling, etc., but for the time being only 2 user pods were running and resources shouldn’t be the problem. Still, it makes sense to check everything until I understand why the node scaled down during the weekend.

Thanks again, I will get back with more info.