Hello
I am deploying JupyterHub in our Kubernetes environment. I have noticed that the user notebook pods are not killed after a period of inactivity, but they are killed when the user performs an explicit logout.
These are my logs:
```
[D 2023-10-30 14:25:26.988 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:25:37.008 JupyterHub reflector:374] pods watcher timeout
[D 2023-10-30 14:25:37.009 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:25:47.026 JupyterHub reflector:374] pods watcher timeout
[D 2023-10-30 14:25:47.026 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:25:48.309 JupyterHub base:344] Refreshing auth for dgharsallaoui
[D 2023-10-30 14:25:48.310 JupyterHub scopes:877] Checking access to /hub/api/users/dgharsallaoui/activity via scope users:activity
[D 2023-10-30 14:25:48.310 JupyterHub scopes:690] Argument-based access to /hub/api/users/dgharsallaoui/activity via users:activity
[D 2023-10-30 14:25:48.313 JupyterHub users:879] Not updating activity for <User(dgharsallaoui 1/1 running)>: 2023-10-30T14:22:37.533380Z < 2023-10-30T14:22:40.773000Z
[D 2023-10-30 14:25:48.313 JupyterHub users:900] Not updating server activity on dgharsallaoui/: 2023-10-30T14:22:37.533380Z < 2023-10-30T14:22:40.773000Z
[I 2023-10-30 14:25:48.314 JupyterHub log:191] 200 POST /hub/api/users/dgharsallaoui/activity (dgharsallaoui@::ffff:10.0.3.1) 24.07ms
[D 2023-10-30 14:25:57.037 JupyterHub reflector:374] pods watcher timeout
[D 2023-10-30 14:25:57.037 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:26:07.051 JupyterHub reflector:374] pods watcher timeout
[D 2023-10-30 14:26:07.052 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:26:17.064 JupyterHub reflector:374] pods watcher timeout
[D 2023-10-30 14:26:17.065 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:26:27.080 JupyterHub reflector:374] pods watcher timeout
[D 2023-10-30 14:26:27.080 JupyterHub reflector:289] Connecting pods watcher
[D 2023-10-30 14:26:37.091 JupyterHub reflector:374] pods watcher timeout
```
minrk
November 2, 2023, 7:18pm
Can you share some configuration? How long is the user inactive, and what does the admin panel or API say about their activity?
The logs you have shared only show one instance of an active server refreshing its activity, which doesn't tell us much.
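For reference, the activity the Hub has recorded can also be read from the REST API. A minimal sketch, assuming an API token with permission to read users is available in `JUPYTERHUB_API_TOKEN`; the Hub URL and username are taken from this thread and are only illustrative:
```
import os
import requests

# Illustrative Hub API endpoint and username from this thread.
hub_api = "http://notebook.interaction.svc:8000/hub/api"
token = os.environ["JUPYTERHUB_API_TOKEN"]  # assumed: a token with read access to users

r = requests.get(
    f"{hub_api}/users/dgharsallaoui",
    headers={"Authorization": f"token {token}"},
)
r.raise_for_status()
user = r.json()

# The user model includes user-level and per-server last_activity timestamps.
print("user last_activity:", user["last_activity"])
for name, server in user.get("servers", {}).items():
    print("server", name or "(default)", "last_activity:", server["last_activity"])
```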
Thank you for your response.
The user was inactive for more than 24 hours.
I have disabled admin access.
This is my server configuration:
```
root@notebook-759698cbc-rqs9b:/srv/jupyterhub# cat /jupyterhub_config.py
import os
# JupyterHub config
c.Application.log_level = 'DEBUG'
c.JupyterHub.active_server_limit = 0
c.JupyterHub.admin_access = False
c.Authenticator.admin_users = {"notebook-jupyterhub"}
c.api_tokens = "2vJEqh0qfzQfw1B5hjS5ezfzfv"
#c.JupyterHub.hub_ip = "0.0.0.0"
#c.JupyterHub.port = 8000
c.JupyterHub.bind_url = 'http://:8000'
# LDAP config
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = "10.12.25.2"
c.LDAPAuthenticator.use_ssl = True
c.LDAPAuthenticator.bind_dn_template = ['cn={username},ou=person,dc=org,dc=cloud']
c.LDAPAuthenticator.lookup_dn_user_dn_attribute = 'cn'
c.LDAPAuthenticator.admin_users = {'notebook-jupyterhub'}
# Spawner config
c.KubeSpawner.image_pull_secrets = 'images'
c.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'
# Note: shutdown_on_logout only stops servers on explicit logout; it does not cull idle servers.
c.JupyterHub.shutdown_on_logout = True
c.Spawner.hub_connect_url = 'http://notebook.interaction.svc:8000'
c.KubeSpawner.image = os.getenv('JUPYTER_IMAGE', 'images.foundation.svc/notebook-jupyter-notebook:ad50769689529-default')
c.KubeSpawner.namespace = 'interaction'
c.KubeSpawner.node_selector = {'platform.interaction.observability': "true"}
c.KubeSpawner.pod_name_template = 'notebook-{unescaped_username}'
c.KubeSpawner.poll_interval = 60
c.KubeSpawner.pvc_name_template = c.KubeSpawner.pod_name_template
c.KubeSpawner.secret_name_template = c.KubeSpawner.pod_name_template
c.KubeSpawner.storage_access_modes = ['ReadWriteOnce']
c.KubeSpawner.storage_capacity = '5Gi'
c.KubeSpawner.storage_class = 'block'
c.KubeSpawner.storage_extra_labels = {'mrsn': 'notebook'}
c.KubeSpawner.storage_pvc_ensure = True
c.KubeSpawner.fs_gid = 1000
c.KubeSpawner.volumes = [
    {
        'name': 'user-data',
        'persistentVolumeClaim': {
            'claimName': c.KubeSpawner.pvc_name_template
        }
    },
    {
        'name': 'shared',
        'persistentVolumeClaim': {
            'claimName': 'notebook-shared'
        }
    }
]
c.KubeSpawner.volume_mounts = [
    {
        'mountPath': '/home/jovyan/private',
        'name': 'user-data'
    },
    {
        'mountPath': '/home/jovyan/shared',
        'name': 'shared'
    }
]
c.KubeSpawner.events_enabled = False
c.KubeSpawner.cpu_limit = 4
c.KubeSpawner.mem_limit = "8G"
c.KubeSpawner.mem_guarantee = "1G"
c.KubeSpawner.cpu_guarantee = 0.1
```
minrk
November 3, 2023, 12:14pm
And what is their last activity, as reported in the admin panel or API?
And where are you running the idle culler, and with what configuration? Or have you configured internal self-culling?
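(By internal self-culling I mean the single-user server shutting itself down after a period of no activity, configured on the single-user side rather than via a Hub-side culler service. A minimal sketch of what that can look like, assuming the notebook image runs Jupyter Server; the file location and timeout values are illustrative:)
```
# Single-user server settings, e.g. in a jupyter_server_config.py baked into
# the notebook image or passed via c.Spawner.args. Values are illustrative.
c.ServerApp.shutdown_no_activity_timeout = 3600  # exit the server after 1 hour with no activity
c.MappingKernelManager.cull_idle_timeout = 1200  # shut down kernels idle for 20 minutes
c.MappingKernelManager.cull_interval = 120       # how often to check for idle kernels, in seconds
```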
I am using JupyterHub 4.0.2 and jupyterhub-kubespawner 6.1.0.
I resolved the issue by adding jupyterhub-idle-culler.
Thank you!
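For anyone landing here with the same symptom: jupyterhub-idle-culler runs as a JupyterHub service defined in jupyterhub_config.py. A minimal sketch of the Hub-side configuration, adapted from the project's README; the role and service names and the one-hour timeout are illustrative, not necessarily what was deployed here:
```
import sys

# Grant the culler service just enough permissions to read activity and stop servers.
c.JupyterHub.load_roles = [
    {
        "name": "jupyterhub-idle-culler-role",
        "scopes": [
            "list:users",
            "read:users:activity",
            "read:servers",
            "delete:servers",
        ],
        "services": ["jupyterhub-idle-culler-service"],
    }
]

# Run the culler as a Hub-managed service.
c.JupyterHub.services = [
    {
        "name": "jupyterhub-idle-culler-service",
        "command": [
            sys.executable,
            "-m", "jupyterhub_idle_culler",
            "--timeout=3600",  # stop servers with no activity for 1 hour
        ],
    }
]
```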