JupyterHub deployed with Helm: hub pod fails its health check

Thank you for your answer, @manics!

I modified the permissions of the data volume, and the database now initializes successfully.
However, the hub still fails to start: it cannot connect to the proxy, even though the proxy pod is running.
My Kubernetes cluster was installed with kubeadm.
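
Since the hub keeps failing to reach the proxy API, a quick connectivity check like the sketch below could confirm whether anything in the namespace can reach the proxy-api service at all. The curlimages/curl image and the expected 403 are my assumptions; the service name, port, and namespace come from the output further down.

# Throwaway pod that calls the CHP API through the proxy-api service.
# Any HTTP response (even 403, since no auth token is sent) means the network
# path works; a timeout or "connection refused" points at the pod network.
kubectl run curl-test -n limy-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -sv -m 5 http://proxy-api:8001/api/routes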

I followed the topic below and restarted Calico, but the result is the same:

Kubernetes - Api_request to the proxy failed with status code 599, retrying
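
(By "restarted Calico" I mean roughly the commands below; the exact form is a sketch, assuming Calico runs as the calico-node DaemonSet and calico-kube-controllers Deployment shown in the kube-system listing further down.)

# Roll the Calico pods and wait for them to come back.
kubectl -n kube-system rollout restart daemonset/calico-node
kubectl -n kube-system rollout restart deployment/calico-kube-controllers
kubectl -n kube-system rollout status daemonset/calico-node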

The hub error is as follows:

# kubectl logs hub-58d9d8bc57-7wvgw -n limy-test
Loading /usr/local/etc/jupyterhub/secret/values.yaml
No config at /usr/local/etc/jupyterhub/existing-secret/values.yaml
[I 2021-09-29 11:18:40.895 JupyterHub app:2459] Running JupyterHub version 1.4.2
[I 2021-09-29 11:18:40.896 JupyterHub app:2489] Using Authenticator: jupyterhub.auth.DummyAuthenticator-1.4.2
[I 2021-09-29 11:18:40.896 JupyterHub app:2489] Using Spawner: kubespawner.spawner.KubeSpawner-1.1.0
[I 2021-09-29 11:18:40.896 JupyterHub app:2489] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-1.4.2
[W 2021-09-29 11:18:40.969 JupyterHub app:1793]
    JupyterHub.admin_users is deprecated since version 0.7.2.
    Use Authenticator.admin_users instead.
[I 2021-09-29 11:20:46.943 JupyterHub app:1838] Not using allowed_users. Any authenticated user will be allowed.
[I 2021-09-29 11:20:47.023 JupyterHub app:2526] Initialized 0 spawners in 0.005 seconds
[I 2021-09-29 11:20:47.032 JupyterHub app:2738] Not starting proxy
[W 2021-09-29 11:21:07.057 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[W 2021-09-29 11:21:27.251 JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying...
[E 2021-09-29 11:21:27.251 JupyterHub app:2969]
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 2967, in launch_instance_async
        await self.start()
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 2742, in start
        await self.proxy.get_all_routes()
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/proxy.py", line 898, in get_all_routes
        resp = await self.api_request('', client=client)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/proxy.py", line 862, in api_request
        result = await exponential_backoff(
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/utils.py", line 184, in exponential_backoff
        raise TimeoutError(fail_message)
    TimeoutError: Repeated api_request to proxy path "" failed.
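
Status 599 is Tornado's code for a request that failed before any response arrived (connection refused or timed out), so the hub never reaches the CHP REST API behind the proxy-api service on port 8001. Two checks that might narrow this down (a sketch; the busybox image is my assumption, the pod and service names are taken from the listings below):

# Does proxy-api resolve inside the namespace, i.e. is CoreDNS reachable?
kubectl run dns-test -n limy-test --rm -it --restart=Never \
  --image=busybox:1.35 -- nslookup proxy-api
# Does the proxy log anything when the hub tries to connect?
kubectl logs proxy-ccd5f79bc-s85nz -n limy-test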


# kubectl describe pod hub-58d9d8bc57-7wvgw -n limy-test
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  5m46s                  default-scheduler  Successfully assigned limy-test/hub-58d9d8bc57-7wvgw to k8s-node
  Normal   Pulled     5m46s                  kubelet            Container image "jupyterhub/k8s-hub:1.1.3" already present on machine
  Normal   Created    5m45s                  kubelet            Created container hub
  Normal   Started    5m45s                  kubelet            Started container hub
  Warning  Unhealthy  45s (x108 over 5m45s)  kubelet            Readiness probe failed: Get "http://10.244.113.130:8081/hub/health": dial tcp 10.244.113.130:8081: connect: connection refused
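
The readiness probe failure looks like a consequence rather than the cause: the hub process exits on the proxy timeout before it ever binds port 8081, so the kubelet gets "connection refused". A check from the node itself (a sketch, assuming curl is available on k8s-node; the pod IP is the one from the probe message and changes on every restart) would confirm that the port is simply not open:

# Run on k8s-node; a healthy hub would answer HTTP 200 on /hub/health.
curl -v -m 5 http://10.244.113.130:8081/hub/health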

JupyterHub services:

# kubectl get service -n limy-test -o wide
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE     SELECTOR
hub            NodePort   10.97.5.21       <none>        8081:32714/TCP   5m59s   app=jupyterhub,component=hub,release=v3.7.0
proxy-api      NodePort   10.105.53.7      <none>        8001:30518/TCP   5m59s   app=jupyterhub,component=proxy,release=v3.7.0
proxy-public   NodePort   10.111.101.141   <none>        80:32561/TCP     5m59s   component=proxy,release=v3.7.0
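
The services exist, but it may be worth confirming that their selectors actually match running pods, i.e. that each service has endpoints behind it (a sketch):

# An empty ENDPOINTS column would mean the selector matches no pod.
kubectl get endpoints hub proxy-api proxy-public -n limy-test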

JupyterHub pods:

# kubectl get pod -n limy-test
NAME                              READY   STATUS             RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
continuous-image-puller-5h98k     1/1     Running            0          4m20s   10.244.235.193   k8s-master   <none>           <none>
continuous-image-puller-z5smk     1/1     Running            0          4m20s   10.244.113.131   k8s-node     <none>           <none>
hub-558b6485-2q6g5                0/1     CrashLoopBackOff   3          4m20s   10.244.113.133   k8s-node     <none>           <none>
proxy-ccd5f79bc-s85nz             1/1     Running            0          4m20s   10.244.113.132   k8s-node     <none>           <none>
user-scheduler-65b559c7c9-jncct   1/1     Running            0          4m20s   10.244.113.135   k8s-node     <none>           <none>
user-scheduler-65b559c7c9-mkgvv   1/1     Running            0          4m20s   10.244.113.134   k8s-node     <none>           <none>

Kubernetes system pods (kube-system):

# kubectl get pod -n kube-system -o wide
NAME                                     READY   STATUS    RESTARTS   AGE   IP                NODE         NOMINATED NODE   READINESS GATES
calico-kube-controllers-8db96c76-44fqq   1/1     Running   0          17m   10.244.113.129    k8s-node     <none>           <none>
calico-node-2rs9d                        1/1     Running   0          17m   191.168.6.2       k8s-node     <none>           <none>
calico-node-qk2kb                        1/1     Running   0          17m   191.168.6.1       k8s-master   <none>           <none>
coredns-558bd4d5db-tct5q                 1/1     Running   3          50d   192.168.235.206   k8s-master   <none>           <none>
coredns-558bd4d5db-tmqww                 1/1     Running   3          50d   192.168.235.208   k8s-master   <none>           <none>
etcd-k8s-master                          1/1     Running   4          50d   191.168.6.1       k8s-master   <none>           <none>
kube-apiserver-k8s-master                1/1     Running   2          16d   191.168.6.1       k8s-master   <none>           <none>
kube-controller-manager-k8s-master       1/1     Running   8          50d   191.168.6.1       k8s-master   <none>           <none>
kube-proxy-dhdh6                         1/1     Running   4          39d   191.168.6.2       k8s-node     <none>           <none>
kube-proxy-m7dgl                         1/1     Running   3          39d   191.168.6.1       k8s-master   <none>           <none>
kube-scheduler-k8s-master                1/1     Running   7          50d   191.168.6.1       k8s-master   <none>           <none>
metrics-server-68b8ffb4c9-7ftzj          1/1     Running   2          12d   192.168.235.217   k8s-master   <none>           <none>

Helm version information:

[root@k8s-master kubespawner]# helm version
version.BuildInfo{Version:"v3.7.0", GitCommit:"eeac83883cb4014fe60267ec6373570374ce770b", GitTreeState:"clean", GoVersion:"go1.16.8"}

Kubernetes version information:

[root@k8s-master kubespawner]# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}