JupyterHub not launching on Helm | K8s

I have a MetalLB load balancer, a k8s cluster (one master and one worker node) at v1.18.5, Helm 3.7, and NFS dynamic volume provisioning set up via Helm. I spun up a JupyterHub instance with Helm. Within a minute everything is set up, but when I use the external IP to open JupyterHub in my browser, nothing loads. Here is my kubectl get all:

pod/continuous-image-puller-4l5gj                      1/1     Running   0          23s
pod/hub-6c9cb48df8-k5t4w                               1/1     Running   0          23s
pod/nfs-subdir-external-provisioner-789697969b-hqp46   1/1     Running   0          23h
pod/nginx2-669c86457c-hc5mv                            1/1     Running   0          35h
pod/proxy-66cb767659-svwbv                             1/1     Running   0          23s
pod/user-scheduler-6d4698dd59-wqw9l                    1/1     Running   0          23s
pod/user-scheduler-6d4698dd59-zk4c7                    1/1     Running   0          23s

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/hub            ClusterIP   <none>        8081/TCP       23s
service/kubernetes     ClusterIP       <none>        443/TCP        39h
service/nginx2         LoadBalancer    80:30746/TCP   32h
service/proxy-api      ClusterIP   <none>        8001/TCP       23s
service/proxy-public   LoadBalancer    80:31336/TCP   23s

NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/continuous-image-puller   1         1         1       1            1           <none>          23s

NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hub                               1/1     1            1           23s
deployment.apps/nfs-subdir-external-provisioner   1/1     1            1           23h
deployment.apps/nginx2                            1/1     1            1           35h
deployment.apps/proxy                             1/1     1            1           23s
deployment.apps/user-scheduler                    2/2     2            2           23s

NAME                                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/hub-6c9cb48df8                               1         1         1       23s
replicaset.apps/nfs-subdir-external-provisioner-789697969b   1         1         1       23h
replicaset.apps/nginx2-669c86457c                            1         1         1       35h
replicaset.apps/proxy-66cb767659                             1         1         1       23s
replicaset.apps/user-scheduler-6d4698dd59                    2         2         2       23s

NAME                                READY   AGE
statefulset.apps/user-placeholder   0/0     23s

Also, below is my storage class for reference: kubectl get sc

nfs-client   cluster.local/nfs-subdir-external-provisioner   Delete          Immediate           true                   23h
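The hub's database volume (the hub-db-dir claim that appears later in the describe output) is provisioned through this class, so it is also worth confirming the claim actually bound. A sketch using plain kubectl, with the names taken from the outputs in this post:

```shell
kubectl get pvc hub-db-dir -n default         # STATUS should read Bound
kubectl describe pvc hub-db-dir -n default    # its events show provisioning errors, if any
```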

I will not paste the config file as it is very large. Basically, what I did was:
helm show values jupyterhub/jupyterhub > /tmp/jupyterhub.yaml
and then, after changing some values:
helm install jupyterhub jupyterhub/jupyterhub --values /tmp/jupyterhub.yaml
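Roughly, the relevant parts of my edits look like this. This is only a sketch: the paths (proxy.secretToken, hub.db.pvc, singleuser.storage) follow the chart's documented values layout and may sit elsewhere in the full dump.

```yaml
# Sketch of the edits described below; not the full values file.
proxy:
  secretToken: "<64-char hex string>"   # generated per the docs, e.g. with: openssl rand -hex 32
hub:
  db:
    pvc:
      storageClassName: nfs-client      # was empty / the cluster default
      storage: 1Gi
singleuser:
  storage:
    dynamic:
      storageClass: nfs-client
    capacity: 2Gi
```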
The only things I changed were the security key (a hex string, as mentioned on the website), writing nfs-client wherever it said storageClass or storageClassName, and perhaps altering the storage size (1Gi/2Gi). That's all. The load balancer itself works fine, because I ran nginx through it and can easily open that in my browser. So I decided to check the JupyterHub pods, first getting the pod names with kubectl get pods:

NAME                                               READY   STATUS    RESTARTS   AGE
continuous-image-puller-4l5gj                      1/1     Running   0          20m
hub-6c9cb48df8-k5t4w                               1/1     Running   0          20m
nfs-subdir-external-provisioner-789697969b-hqp46   1/1     Running   0          23h
nginx2-669c86457c-hc5mv                            1/1     Running   0          35h
proxy-66cb767659-svwbv                             1/1     Running   0          20m
user-scheduler-6d4698dd59-wqw9l                    1/1     Running   0          20m
user-scheduler-6d4698dd59-zk4c7                    1/1     Running   0          20m

and then running kubectl describe pod hub-6c9cb48df8-k5t4w -n default, which gave me this:

Name:         hub-6c9cb48df8-k5t4w
Namespace:    default
Priority:     0
Node:         worker/
Start Time:   Sat, 27 Nov 2021 10:21:43 +0000
Labels:       app=jupyterhub
Annotations:  checksum/config-map: f746d7e563a064e9158fe6f7f59bdbd463ed24ad7a927d75a1f18c022c3afeaf
              checksum/secret: 926186a1b18e5cb9aa5b8c0a177f379299bcf0f05ac4de17d1958422054d15e5
Status:       Running
Controlled By:  ReplicaSet/hub-6c9cb48df8
Containers:
  hub:
    Container ID:  docker://1d5e3a812f9712f6d59c09d855b034e2f6bc3e058bad4932db87145ec09f70d1
    Image:         jupyterhub/k8s-hub:1.2.0
    Image ID:      docker-pullable://jupyterhub/k8s-hub@sha256:e4770285aaf7230b930643986221757c2cc2e9420f5e21ac892582c96a57ce1c
    Port:          8081/TCP
    Host Port:     0/TCP
    State:          Running
      Started:      Sat, 27 Nov 2021 10:21:45 +0000
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
    Readiness:      http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jupyterhub
      POD_NAMESPACE:           default (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'>  Optional: false
    Mounts:
      /srv/jupyterhub from pvc (rw)
      /usr/local/etc/jupyterhub/config/ from config (rw)
      /usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
      /usr/local/etc/jupyterhub/secret/ from secret (rw)
      /usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
      /var/run/secrets/kubernetes.io/serviceaccount from hub-token-zd25x (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub
    Optional:    false
  pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  hub-token-zd25x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-token-zd25x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     hub.jupyter.org/dedicated=core:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  21m                default-scheduler  Successfully assigned default/hub-6c9cb48df8-k5t4w to worker
  Normal   Pulled     21m                kubelet, worker    Container image "jupyterhub/k8s-hub:1.2.0" already present on machine
  Normal   Created    21m                kubelet, worker    Created container hub
  Normal   Started    21m                kubelet, worker    Started container hub
  Warning  Unhealthy  21m (x3 over 21m)  kubelet, worker    Readiness probe failed: Get dial tcp connect: connection refused

So I know that the pod is unhealthy, but I do not have any other details to go on. Any help on how to fix or debug this would be highly appreciated.
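The only extra detail I could think of gathering would come from commands like these (plain kubectl; the pod name is the one from the outputs above):

```shell
# The hub container's logs often say why nothing was listening on 8081 yet
kubectl logs hub-6c9cb48df8-k5t4w -n default

# Hit the same endpoint the readiness probe uses
kubectl port-forward pod/hub-6c9cb48df8-k5t4w 8081:8081 -n default &
curl -i http://localhost:8081/hub/health    # a healthy hub answers HTTP 200

# Recent events across the namespace, sorted by time
kubectl get events -n default --sort-by=.lastTimestamp
```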

Thank you!

I understand from kubernetes - JupyterHub Not Loading Up with Given External-IP| K8s | Helm - Stack Overflow that you have made the hub pod healthy at this point, and are now in a state where you only await the IP from the k8s Service resource…

The JupyterHub external IP, however, keeps loading infinitely with no errors.

But it seems from the output here that you have received an external IP for the k8s Service resource:

service/proxy-public   LoadBalancer    80:31336/TCP   23s

So I'm not really sure what problem you have, but it sounds like a non-JupyterHub-related issue, more specific to accessing a k8s cluster exposed via MetalLB and a k8s Service of type: LoadBalancer.
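For that, the checks would be on the MetalLB side rather than JupyterHub's. A sketch; the speaker pod label below is the one used by MetalLB's stock manifests and may differ in your install:

```shell
# Confirm proxy-public really holds an external address
kubectl get svc proxy-public -n default -o wide

# MetalLB's speaker pods announce the address on the L2 network;
# their logs show whether the IP is being announced
kubectl -n metallb-system logs -l component=speaker --tail=50

# From a machine on the same network, test the path directly
curl -v http://<EXTERNAL-IP>/hub/
```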

The hub pod is unhealthy. I still get the "Readiness probe failed" issue, and this is out of the box; I did not change anything.