Server never showed up at http://161.32.35.176:8888/user/niranjan/ after 30 seconds. Giving up

The JupyterHub hub pod is running successfully, but when we try to log in, a new user pod is created and then terminates after about 30-40 seconds, with no error logs in the user pod.
However, I can see some error logs in the hub pod:

[W 2023-04-04 07:51:06.498 JupyterHub user:881] niranjan's server never showed up at http://161.32.35.176:8888/user/niranjan/ after 30 seconds. Giving up.
    
    Common causes of this timeout, and debugging tips:
    
    1. The server didn't finish starting,
       or it crashed due to a configuration issue.
       Check the single-user server's logs for hints at what needs fixing.
    2. The server started, but is not accessible at the specified URL.
       This may be a configuration issue specific to your chosen Spawner.
       Check the single-user server logs and resource to make sure the URL
       is correct and accessible from the Hub.
    3. (unlikely) Everything is working, but the server took too long to respond.
       To fix: increase `Spawner.http_timeout` configuration
       to a number of seconds that is enough for servers to become responsive.
    
[I 2023-04-04 07:51:06.499 JupyterHub spawner:2780] Deleting pod hub/jupyter-niranjan
[E 2023-04-04 07:51:10.657 JupyterHub gen:630] Exception in Future <Task finished name='Task-284' coro=<BaseHandler.spawn_single_user.<locals>.finish_user_spawn() done, defined at /usr/local/lib/python3.9/site-packages/jupyterhub/handlers/base.py:954> exception=TimeoutError("Server at http://161.32.35.176:8888/user/niranjan/ didn't respond in 30 seconds")> after timeout
    Traceback (most recent call last):
      File "/usr/local/lib/python3.9/site-packages/tornado/gen.py", line 625, in error_callback
        future.result()
      File "/usr/local/lib/python3.9/site-packages/jupyterhub/handlers/base.py", line 961, in finish_user_spawn
        await spawn_future
      File "/usr/local/lib/python3.9/site-packages/jupyterhub/user.py", line 862, in spawn
        await self._wait_up(spawner)
      File "/usr/local/lib/python3.9/site-packages/jupyterhub/user.py", line 906, in _wait_up
        raise e
      File "/usr/local/lib/python3.9/site-packages/jupyterhub/user.py", line 876, in _wait_up
        resp = await server.wait_up(
      File "/usr/local/lib/python3.9/site-packages/jupyterhub/utils.py", line 288, in wait_for_http_server
        re = await exponential_backoff(
      File "/usr/local/lib/python3.9/site-packages/jupyterhub/utils.py", line 236, in exponential_backoff
        raise asyncio.TimeoutError(fail_message)
    asyncio.exceptions.TimeoutError: Server at http://161.32.35.176:8888/user/niranjan/ didn't respond in 30 seconds

When Istio is not enabled, the setup works fine: the notebook pod is created and we are able to access it. But when Istio is enabled, the hub and the other pods work, while the user pod with the notebook fails with the above error in the hub pod logs.

It sounds like Istio is preventing connections between the hub and the singleuser servers. You’ll need to dig into how Istio works, figure out what it’s doing, and work out how to configure it to allow the required network traffic.
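
A common culprit is strict mutual TLS: when the hub talks to the notebook pod by its raw pod IP, the traffic is not associated with any Service the mesh knows about, and the sidecar on the user pod can reject it. As a rough sketch (assuming STRICT mTLS is what is biting you; the names below are placeholders), a namespace-scoped PeerAuthentication set to PERMISSIVE lets you test that theory:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: jupyterhub-permissive   # hypothetical name
  namespace: jhub               # placeholder: the namespace your user pods run in
spec:
  mtls:
    mode: PERMISSIVE            # accept both mTLS and plain-text connections, so direct pod-IP traffic from the hub is not dropped

If that makes the spawn succeed, you can then look for a tighter fix, such as putting a Service in front of the user pods so the mesh recognises them.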

I am also facing the same issue, and the logs are shown below. I don't have Istio on my k8s cluster … it's a simple, minimal deployment using the JupyterHub Helm chart, so I am probably missing some steps.

[I 2023-04-05 09:05:36.215 JupyterHub log:186] 200 GET /hub/api/users?state=[secret] (jupyterhub-idle-culler@::1) 8.52ms
[D 2023-04-05 09:05:36.468 JupyterHub utils:277] Server at http://traefik-daskhub-dask-gateway.jhub:80/services/dask-gateway/ responded with 404
[D 2023-04-05 09:05:36.468 JupyterHub proxy:392] Fetching routes to check
[D 2023-04-05 09:05:36.468 JupyterHub proxy:884] Proxy: Fetching GET http://proxy-api:8001/api/routes
[D 2023-04-05 09:05:36.469 JupyterHub proxy:395] Checking routes
[I 2023-04-05 09:05:36.470 JupyterHub app:3162] JupyterHub is now running, internal Hub API at http://hub:8081/hub/
[D 2023-04-05 09:05:36.471 JupyterHub app:2768] It took 10.936 seconds for the Hub to start
[D 2023-04-05 09:05:36.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.69ms
[D 2023-04-05 09:05:38.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.63ms
[D 2023-04-05 09:05:40.651 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.64ms
[D 2023-04-05 09:05:42.651 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.65ms
[D 2023-04-05 09:05:44.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.64ms
[D 2023-04-05 09:05:46.103 JupyterHub reflector:362] pods watcher timeout
[D 2023-04-05 09:05:46.103 JupyterHub reflector:281] Connecting pods watcher
[D 2023-04-05 09:05:46.133 JupyterHub reflector:362] events watcher timeout
[D 2023-04-05 09:05:46.133 JupyterHub reflector:281] Connecting events watcher
[D 2023-04-05 09:05:46.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.62ms
[D 2023-04-05 09:05:48.651 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.59ms
[D 2023-04-05 09:05:50.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.61ms
[D 2023-04-05 09:05:52.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.62ms
[D 2023-04-05 09:05:54.652 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.62ms
[D 2023-04-05 09:05:56.117 JupyterHub reflector:362] pods watcher timeout
[D 2023-04-05 09:05:56.117 JupyterHub reflector:281] Connecting pods watcher
[D 2023-04-05 09:05:56.156 JupyterHub reflector:362] events watcher timeout
[D 2023-04-05 09:05:56.156 JupyterHub reflector:281] Connecting events watcher
[D 2023-04-05 09:05:56.651 JupyterHub log:186] 200 GET /hub/health (@10.222.5.14) 0.60ms
[W 2023-04-05 09:05:57.788 JupyterHub user:881] hub's server never showed up at http://hub-55f9f7f97d-bgldx:34939/user/hub/ after 30 seconds. Giving up.

    Common causes of this timeout, and debugging tips:

    1. The server didn't finish starting,
       or it crashed due to a configuration issue.
       Check the single-user server's logs for hints at what needs fixing.
    2. The server started, but is not accessible at the specified URL.
       This may be a configuration issue specific to your chosen Spawner.
       Check the single-user server logs and resource to make sure the URL
       is correct and accessible from the Hub.
    3. (unlikely) Everything is working, but the server took too long to respond.
       To fix: increase `Spawner.http_timeout` configuration
       to a number of seconds that is enough for servers to become responsive.

[D 2023-04-05 09:05:57.788 JupyterHub user:930] Stopping hub
[I 2023-04-05 09:05:57.789 JupyterHub spawner:2780] Deleting pod jhub/jupyter-hub
[D 2023-04-05 09:05:57.884 JupyterHub user:950] Deleting oauth client jupyterhub-user-hub
[D 2023-04-05 09:05:57.892 JupyterHub user:953] Finished stopping hub
[E 2023-04-05 09:05:57.902 JupyterHub app:2496] hub does not appear to be running at http://hub-55f9f7f97d-bgldx:34939/user/hub/, shutting it down.
[D 2023-04-05 09:05:57.902 JupyterHub app:2520] hub not running
[D 2023-04-05 09:05:57.902 JupyterHub app:2564] Loaded users:
         hub
[I 2023-04-05 09:05:57.903 JupyterHub app:2844] Initialized 1 spawners in 31.846 seconds

Hi,
I solved the issue by creating a headless service.
The problem was that the hub was trying to connect to the notebook pod directly using the pod IP, and Istio was blocking this direct routing.
So I created a headless service with pod selector labels:

apiVersion: v1
kind: Service
metadata:
  name: "service-name"          # replace with your service name
  namespace: "namespace"        # namespace where the user pods run
  annotations:
    networking.istio.io/exportTo: "."   # keep the service visible only within this namespace
spec:
  type: ClusterIP
  clusterIP: None               # headless: DNS resolves directly to the pod IPs
  ports:
  - name: tcp
    port: port-no-svc           # replace with the notebook server port
  selector:
    app: notebook-pod-selector  # label selector matching the notebook pods
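
For context, in a default z2jh deployment the user pods are typically labelled component: singleuser-server and the notebook server listens on port 8888, so those are the values that usually go into the selector and port placeholders above. Alternatively, if your KubeSpawner version supports it (I believe recent releases have a services_enabled option that creates a Service per user pod, but check your version's docs, this is an assumption on my part), you can let the spawner manage the service instead of maintaining one by hand, e.g. through the chart's hub.config passthrough:

hub:
  config:
    KubeSpawner:
      services_enabled: true   # assumption: supported by your kubespawner release; the hub then reaches the pod through a per-user Service instead of the pod IP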

@Niranjan_P_B we are also trying to deploy JupyterHub in an Istio-enabled cluster. I'm not able to access JupyterHub when fetching it via the URL. Can you expand on whether you faced the same issue and how you solved it?