JupyterHub kernel stuck on "Connecting"

Hello,
I’m using the Bitnami 5.2.9 Helm chart, and whenever I create a new user with the default jupyterhub/os-shell and jupyterhub/base-notebook images, the kernel keeps looping on "Connecting". I tried custom images, but the problem persists.
Here’s the configuration I’m using now:

Chart:
  Name: jupyterhub
  Version: 5.2.9
Release:
  Name: jupyterhub
  Namespace: jupyterhub
  Service: Helm
hub:
  config:
    JupyterHub:
      admin_access: true
      authenticator_class: nativeauthenticator.NativeAuthenticator
      Authenticator:
        admin_users:
          - test
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  activeServerLimit:
  db:
    type: postgres
    url: postgresql://postgres@mypostgres.kubegres.svc.cluster.local:5432/postgres
  services: {}
  allowNamedServers: false
  namedServerLimitPerUser:
  redirectToServer:
  shutdownOnLogout:
singleuser:
  networkTools:
    image:
      name: "jupyterhub/os-shell"
      tag: "11-debian-11-r91"
      digest: 
      pullPolicy: Always
  cloudMetadata:
    blockWithIptables: false
  events: true
  extraAnnotations:
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
    app.kubernetes.io/component: singleuser
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    app.kubernetes.io/version: 4.0.2
    helm.sh/chart: jupyterhub-5.2.9
  uid: 1001
  fsGid: 1001
  serviceAccountName: jupyterhub-singleuser
  storage:
    type: dynamic
    extraLabels:
      app.kubernetes.io/component: singleuser
      app.kubernetes.io/instance: jupyterhub
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: jupyterhub
      app.kubernetes.io/version: 4.0.2
      helm.sh/chart: jupyterhub-5.2.9
    capacity: "10Gi"
    homeMountPath: /opt/bitnami/jupyterhub-singleuser
    dynamic:
      pvcNameTemplate: jupyterhub-claim-{username}{servername}
      volumeNameTemplate: jupyterhub-volume-{username}{servername}
      storageAccessModes:
        - ReadWriteOnce
      storageClass: csi-cinder-classic
  image:
    name: jupyter/minimal-notebook
    tag: 2343e33dec46
  profileList:
    - display_name: "Minimal environment"
      description: "To avoid too much bells and whistles: Python."
      default: true
    - display_name: "Datascience environment"
      description: "If you want the additional bells and whistles: Python, R, and Julia."
      kubespawner_override:
        image: jupyter/datascience-notebook:2343e33dec46
    - display_name: "Spark environment"
      description: "The Jupyter Stacks spark image!"
      kubespawner_override:
        image: jupyter/all-spark-notebook:2343e33dec46
    - display_name: "Learning Data Science"
      description: "Datascience Environment with Sample Notebooks"
      kubespawner_override:
        image: jupyter/datascience-notebook:2343e33dec46
        lifecycle_hooks:
          postStart:
            exec:
              command:
                - "sh"
                - "-c"
                - >
                  gitpuller https://github.com/data-8/materials-fa17 master materials-fa;
  podNameTemplate: jupyterhub-jupyter-{username}
  startTimeout: 3000
  cpu:
    limit: 1.0
    guarantee: 
  memory:
    limit: "1G"
    guarantee: 
  cmd: jupyterhub-singleuser
  defaultUrl: /tree/
  extraEnv:
    JUPYTERHUB_SINGLEUSER_APP: "jupyter_server.serverapp.ServerApp"
cull:
  enabled: true
  users: false
  removeNamedServers: false
  timeout: 36000
  every: 6000
  concurrency: 100
  maxAge: 0

With this new configuration I cannot even create my server:

Spawn failed: Server at http://10.2.5.196:8888/user/test/ didn't respond in 30 seconds
Event log
Server requested
2024-08-14T07:53:40.201480Z [Normal] Successfully assigned jupyterhub/jupyterhub-jupyter-test to mlops-node-80985c
2024-08-14T07:53:56Z [Normal] AttachVolume.Attach succeeded for volume "ovh-managed-kubernetes-iydufv-pvc-f27d86c3-47ae-46c5-9cdb-be187817a9db"
2024-08-14T07:53:58Z [Normal] Pulling image "jupyter/minimal-notebook:2343e33dec46"
2024-08-14T07:53:59Z [Normal] Successfully pulled image "jupyter/minimal-notebook:2343e33dec46" in 669.808434ms (669.820549ms including waiting)
2024-08-14T07:53:59Z [Normal] Created container notebook
2024-08-14T07:53:59Z [Normal] Started container notebook
2024-08-14T07:54:02Z [Normal] Successfully pulled image "jupyter/minimal-notebook:2343e33dec46" in 661.133274ms (661.164722ms including waiting)
Spawn failed: Server at http://10.2.5.196:8888/user/test/ didn't respond in 30 seconds

Any guidance is appreciated, thank you.

The Bitnami Helm chart is not the same as Z2JH

That tag (2343e33dec46) corresponds to a 6-year-old image, which is probably incompatible.

Try using the latest image instead. If you still have problems, please turn on debug logging and share the logs from your hub and single-user server, along with your browser console logs.
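
For example, you could point the single-user image at a current tag in the same values file; this is only a sketch, the tag shown is illustrative (the Jupyter Docker Stacks images are also published under quay.io/jupyter nowadays):

singleuser:
  image:
    name: jupyter/minimal-notebook   # or quay.io/jupyter/minimal-notebook
    tag: latest                      # any recent tag; 2343e33dec46 is years old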

If you see problems relating to websocket connections in your browser console, that indicates something is blocking them, e.g. an incorrectly configured ingress, firewall, or proxy.

Hello, thank you for your response,
but the issue still occurs with any image I use (latest or not). Here is an excerpt from the hub log:

    1. The server didn't finish starting,
       or it crashed due to a configuration issue.
       Check the single-user server's logs for hints at what needs fixing.
    2. The server started, but is not accessible at the specified URL.
       This may be a configuration issue specific to your chosen Spawner.
       Check the single-user server logs and resource to make sure the URL
       is correct and accessible from the Hub.
    3. (unlikely) Everything is working, but the server took too long to respond.
       To fix: increase `Spawner.http_timeout` configuration
       to a number of seconds that is enough for servers to become responsive.
    
[E 2024-08-14 14:56:11.104 JupyterHub gen:630] Exception in Future <Task finished name='Task-75' coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/handlers/base.py:981> exception=TimeoutError("Server at http://10.2.5.198:8888/user/test/ didn't respond in 30 seconds")> after timeout
    Traceback (most recent call last):
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/tornado/gen.py", line 625, in error_callback
        future.result()
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/handlers/base.py", line 988, in finish_user_spawn
        await spawn_future
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/user.py", line 914, in spawn
        await self._wait_up(spawner)
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/user.py", line 958, in _wait_up
        raise e
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/user.py", line 928, in _wait_up
        resp = await server.wait_up(
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/utils.py", line 289, in wait_for_http_server
        re = await exponential_backoff(
      File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/utils.py", line 237, in exponential_backoff
        raise asyncio.TimeoutError(fail_message)
    asyncio.exceptions.TimeoutError: Server at http://10.2.5.198:8888/user/test/ didn't respond in 30 seconds
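
(For reference, item 3 of that message refers to the Spawner.http_timeout traitlet; raising it through the chart would look roughly like the sketch below, assuming hub.config entries are passed straight through to JupyterHub:)

hub:
  config:
    Spawner:
      http_timeout: 120   # seconds the hub waits for the single-user server to answer over HTTP
      start_timeout: 300  # seconds the hub waits for the server pod to start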

I tried the suggested solutions and still get the same error.

Can you turn on debug logging and share the full logs from both the JupyterHub and single-user server pods? Just sharing the error isn’t enough; the preceding logs usually contain useful information.

I couldn’t find any real documentation for turning debug on, so I tried a bunch of parameters, e.g.

debug:
  enabled: true

But I couldn’t get more logs. Am I doing something wrong?

To enable debug logs:
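
One option, sketched under the assumption that the Bitnami chart supports the Z2JH-style debug flag and passes hub.config entries straight through to JupyterHub:

debug:
  enabled: true
hub:
  config:
    JupyterHub:
      log_level: DEBUG
    Spawner:
      debug: true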

To retrieve the logs:
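
For example (the single-user pod name comes from the podNameTemplate and the event log above; use kubectl get pods to find the exact hub and proxy pod names in your release):

kubectl get pods -n jupyterhub                      # list the hub, proxy and user pods
kubectl logs -n jupyterhub <hub-pod-name>           # hub logs
kubectl logs -n jupyterhub jupyterhub-jupyter-test  # single-user server logs
kubectl logs -n jupyterhub <proxy-pod-name>         # proxy logs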

If that doesn’t work then maybe the Bitnami chart behaves slightly differently to Z2JH?

Actually, the Bitnami chart inherits the Python scripts that initialise the hub from the Z2JH Helm chart, so yes, enabling debugging worked and I managed to connect to my server.
But the issue remains the kernel stuck on "Connecting" indefinitely; I tried multiple images, same issue.

Can you show us the Console view?

If you see something related to websockets, it probably means your Kubernetes ingress, or whatever you’re using to proxy connections, is blocking them. It could also be something on your computer or network.
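
If it does turn out to be the NGINX ingress, one common adjustment is to raise the proxy timeouts, since websocket connections are long-lived; a sketch of the relevant annotations (the values are only illustrative):

nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"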

From the proxy pod:

08:59:36.152 [ConfigProxy] error: 503 GET /user/user/api/terminals connect ETIMEDOUT 10.2.4.137:8888
08:59:48.450 [ConfigProxy] error: 503 GET /user/user/api/kernels/0022afae-13ea-4b2a-805a-4edaec56d729 connect ETIMEDOUT 10.2.4.137:8888

On the hub pod:

[I 2024-08-28 09:14:52.337 JupyterHub log:191] 200 GET /hub/api/users/user/server/progress?_xsrf=[secret] (user@10.2.4.131) 20726.75ms
[I 2024-08-28 09:14:52.437 JupyterHub log:191] 302 GET /hub/spawn-pending/user -> /user/user/ (user@10.2.4.131) 6.84ms
[W 2024-08-28 09:14:52.558 JupyterHub log:191] 403 GET /hub/api/user (@10.2.4.139) 5.80ms
[I 2024-08-28 09:14:52.660 JupyterHub log:191] 302 GET /hub/api/oauth2/authorize?client_id=jupyterhub-user-user&redirect_uri=%2Fuser%2Fuser%2Foauth_callback&response_type=code&state=[secret] -> /user/user/oauth_callback?code=[secret]&state=[secret] (user@10.2.4.131) 45.02ms
[I 2024-08-28 09:14:52.771 JupyterHub log:191] 200 POST /hub/api/oauth2/token (user@10.2.4.139) 47.38ms
[I 2024-08-28 09:14:52.795 JupyterHub log:191] 200 GET /hub/api/user (user@10.2.4.139) 16.21ms
[I 2024-08-28 09:16:00.792 JupyterHub log:191] 302 GET / -> /hub/ (@10.2.4.131) 1.49ms
[I 2024-08-28 09:16:00.860 JupyterHub log:191] 302 GET /hub/ -> /user/user/ (user@10.2.4.131) 12.38ms
[I 2024-08-28 09:19:41.526 JupyterHub log:191] 200 POST /hub/api/users/user/activity (user@10.2.4.139) 15.90ms

And in the user pod:

[W 2024-08-28 09:16:09.841 ServerApp] Notebook usr/Untitled.ipynb is not trusted
[I 2024-08-28 09:16:09.843 ServerApp] 200 GET /user/user/api/contents/usr/Untitled.ipynb?type=notebook&_=1724836569522 (user@10.2.4.131) 6.98ms
[W 2024-08-28 09:16:09.880 ServerApp] 404 GET /user/user/nbextensions/widgets/notebook/js/extension.js?v=20240828091452 (user@10.2.4.131) 5.72ms
[I 2024-08-28 09:16:09.962 ServerApp] 201 POST /user/user/api/sessions (user@10.2.4.131) 2.39ms
[I 2024-08-28 09:16:09.971 ServerApp] 200 GET /user/user/api/contents/usr/Untitled.ipynb/checkpoints?_=1724836569523 (user@10.2.4.131) 7.86ms
[I 2024-08-28 09:16:25.877 ServerApp] 200 GET /user/user/api/contents/usr/Untitled.ipynb?content=0&_=1724836569524 (user@10.2.4.131) 5.30ms

And yes, it does look like a websocket issue! Thank you for your help, I’ll try to find some inspiration (I changed my proxy service to a LoadBalancer, but still the same issue).

Finally, I fixed the issue on my ingress by removing the headers I had configured.
I also added these annotations:

annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
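
For context, these annotations sit under metadata.annotations of the Ingress resource; a minimal sketch with placeholder host and backend service names (not the actual values from this cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jupyterhub
  namespace: jupyterhub
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/proxy-body-size: '0'
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  rules:
    - host: jupyterhub.example.com              # placeholder
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jupyterhub-proxy-public   # placeholder; point at your chart’s proxy service
                port:
                  number: 80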