Hello,
I'm using the Bitnami 5.2.9 Helm chart, and whenever I create a new user with the default jupyterhub/os-shell & jupyterhub/base-notebook images, the kernel keeps looping on "Connecting". I tried custom images as well, but the problem persists.
Here's the conf I'm using now:
Spawn failed: Server at http://10.2.5.196:8888/user/test/ didn't respond in 30 seconds
Event log
Server requested
2024-08-14T07:53:40.201480Z [Normal] Successfully assigned jupyterhub/jupyterhub-jupyter-test to mlops-node-80985c
2024-08-14T07:53:56Z [Normal] AttachVolume.Attach succeeded for volume "ovh-managed-kubernetes-iydufv-pvc-f27d86c3-47ae-46c5-9cdb-be187817a9db"
2024-08-14T07:53:58Z [Normal] Pulling image "jupyter/minimal-notebook:2343e33dec46"
2024-08-14T07:53:59Z [Normal] Successfully pulled image "jupyter/minimal-notebook:2343e33dec46" in 669.808434ms (669.820549ms including waiting)
2024-08-14T07:53:59Z [Normal] Created container notebook
2024-08-14T07:53:59Z [Normal] Started container notebook
2024-08-14T07:54:02Z [Normal] Successfully pulled image "jupyter/minimal-notebook:2343e33dec46" in 661.133274ms (661.164722ms including waiting)
Spawn failed: Server at http://10.2.5.196:8888/user/test/ didn't respond in 30 seconds
This corresponds to a six-year-old image, which is probably incompatible.
Try using the latest image instead. If you still have problems, please turn on debug logging and share the logs from your hub and single-user server, along with your browser console logs.
If you see problems relating to websocket connections in your browser console, that indicates something is blocking them, e.g. an incorrectly configured ingress, firewall, or proxy.
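To turn on debug logging, something like the following in `jupyterhub_config.py` should work (how you inject it depends on your chart, e.g. an extraConfig-style hook, so treat this as a sketch rather than chart-specific instructions):

```python
# jupyterhub_config.py -- illustrative sketch for verbose logging
c.JupyterHub.log_level = 'DEBUG'      # verbose hub logs
c.Spawner.debug = True                # passes --debug to the single-user server
c.ConfigurableHTTPProxy.debug = True  # verbose proxy logs
```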
Hello, thank you for your response,
but it still occurs with any image I use (latest or not).
Common causes of this timeout, and debugging tips:
1. The server didn't finish starting,
or it crashed due to a configuration issue.
Check the single-user server's logs for hints at what needs fixing.
2. The server started, but is not accessible at the specified URL.
This may be a configuration issue specific to your chosen Spawner.
Check the single-user server logs and resource to make sure the URL
is correct and accessible from the Hub.
3. (unlikely) Everything is working, but the server took too long to respond.
To fix: increase `Spawner.http_timeout` configuration
to a number of seconds that is enough for servers to become responsive.
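(Hint 3 corresponds to a one-line setting in `jupyterhub_config.py`; the value below is only an example:)

```python
# Illustrative only -- 120 s is an arbitrary choice; the default is 30 s.
c.Spawner.http_timeout = 120
```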
[E 2024-08-14 14:56:11.104 JupyterHub gen:630] Exception in Future <Task finished name='Task-75' coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/handlers/base.py:981> exception=TimeoutError("Server at http://10.2.5.198:8888/user/test/ didn't respond in 30 seconds")> after timeout
Traceback (most recent call last):
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/tornado/gen.py", line 625, in error_callback
    future.result()
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/handlers/base.py", line 988, in finish_user_spawn
    await spawn_future
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/user.py", line 914, in spawn
    await self._wait_up(spawner)
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/user.py", line 958, in _wait_up
    raise e
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/user.py", line 928, in _wait_up
    resp = await server.wait_up(
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/utils.py", line 289, in wait_for_http_server
    re = await exponential_backoff(
  File "/opt/bitnami/miniconda/lib/python3.8/site-packages/jupyterhub/utils.py", line 237, in exponential_backoff
    raise asyncio.TimeoutError(fail_message)
asyncio.exceptions.TimeoutError: Server at http://10.2.5.198:8888/user/test/ didn't respond in 30 seconds
I tried the suggested solutions and still get the same error.
Can you turn on debug logging and share the full logs from both the JupyterHub and single-user server pods? Just sharing the error isn't enough; the preceding logs usually contain useful information.
Actually, the Bitnami chart inherits the Python scripts that initialize the hub from the Z2JH Helm chart, so yes, enabling debugging worked and I managed to connect to my server.
But the remaining issue is the kernel: it's stuck on "Connecting" indefinitely. I tried multiple images, but the issue is the same.
If you see something related to websockets, it probably means your Kubernetes ingress, or whatever you're using to proxy connections, is blocking websockets. It could also be something on your computer or network.
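If you want to test this outside the browser, here's a rough probe using the `websocket-client` package (`pip install websocket-client`); the host, user, kernel id, and token below are placeholders, not values from your deployment:

```python
# Rough websocket probe against a kernel's channels endpoint.
# Every constant below is a placeholder you must fill in yourself.
import websocket  # from the websocket-client package

HUB_HOST = "hub.example.com"   # placeholder: your hub's external hostname
USER = "test"                  # placeholder: the JupyterHub user
KERNEL_ID = "<kernel-id>"      # placeholder: from GET /user/<user>/api/kernels
TOKEN = "<api-token>"          # placeholder: a JupyterHub API token

url = f"wss://{HUB_HOST}/user/{USER}/api/kernels/{KERNEL_ID}/channels"
try:
    ws = websocket.create_connection(
        url, header={"Authorization": f"token {TOKEN}"}, timeout=10
    )
    print("websocket upgrade succeeded")
    ws.close()
except Exception as exc:
    # A 4xx/5xx during the upgrade, or a hang, points at the proxy/ingress layer.
    print(f"websocket upgrade failed: {exc}")
```

If this fails from outside the cluster but succeeds from a pod inside it, the blocker is most likely your ingress or load balancer.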
[W 2024-08-28 09:16:09.841 ServerApp] Notebook usr/Untitled.ipynb is not trusted
[I 2024-08-28 09:16:09.843 ServerApp] 200 GET /user/user/api/contents/usr/Untitled.ipynb?type=notebook&_=1724836569522 (user@10.2.4.131) 6.98ms
[W 2024-08-28 09:16:09.880 ServerApp] 404 GET /user/user/nbextensions/widgets/notebook/js/extension.js?v=20240828091452 (user@10.2.4.131) 5.72ms
[I 2024-08-28 09:16:09.962 ServerApp] 201 POST /user/user/api/sessions (user@10.2.4.131) 2.39ms
[I 2024-08-28 09:16:09.971 ServerApp] 200 GET /user/user/api/contents/usr/Untitled.ipynb/checkpoints?_=1724836569523 (user@10.2.4.131) 7.86ms
[I 2024-08-28 09:16:25.877 ServerApp] 200 GET /user/user/api/contents/usr/Untitled.ipynb?content=0&_=1724836569524 (user@10.2.4.131) 5.30ms
And yes, it looks like a websocket issue! Thank you for your help, I'll try to find some inspiration. (I changed my proxy service into a LoadBalancer, but still the same issue.)