Helm install on-prem K8s

Hello,
I am trying to install JupyterHub with Helm 3 on an on-premises Kubernetes cluster.

helm upgrade --cleanup-on-fail --install $RELEASE jupyterhub/jupyterhub --namespace $NAMESPACE --create-namespace --version=0.10.6 --values config.yaml

The Helm install always times out, with this log in the hook-image-awaiter pod:

kubectl logs hook-image-awaiter-pzqzh -n jhub
2021/01/15 14:41:22 [ERR] GET https://kubernetes.default.svc:443/apis/apps/v1/namespaces/jhub/daemonsets/hook-image-puller request failed: Get "https://kubernetes.default.svc:443/apis/apps/v1/namespaces/jhub/daemonsets/hook-image-puller": dial tcp: lookup kubernetes.default.svc on 10.96.0.10:53: read udp 10.44.0.3:51745->10.96.0.10:53: i/o timeout

The config.yaml looks like this:

proxy:
  service:
    type: ClusterIP
  secretToken: <private_key>

Do you have an idea what is going wrong? I am not sure whether it is due to Kubernetes or the Helm chart.

Thanks

Do you have full admin control over your cluster? How was it set up?
Since the error is with the pre-puller, you could try disabling it:
https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/optimization.html#pulling-images-before-users-arrive
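
For chart version 0.10.6, disabling both the hook-based and the continuous pre-puller in config.yaml would look something like this (a sketch based on the linked docs):

prePuller:
  hook:
    enabled: false
  continuous:
    enabled: false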

Thanks for the quick answer! That helped me get one step further.
It is a cluster I set up using kubeadm with 3 VMs on one computer (1 master, 2 nodes), just to play around before putting it into production.

Now the hub is failing with:

Not starting proxy
[W 2021-01-15 19:44:43.358 JupyterHub proxy:807] api_request to the proxy failed with status code 599

This post seems to describe the same problem as mine: https://discourse.jupyter.org/t/proxy-error-599-when-i-try-to-make-an-offline-installation
But I am not sure I understand the solution given at the end.

At least it’s starting, which is progress :grinning:
Could you paste the output of kubectl get pods and kubectl describe pods? This will help track down whether the API request failed because another pod failed to start, or whether there’s a K8s networking problem.
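
For example (assuming the jhub namespace used above):

kubectl get pods -n jhub
kubectl describe pods -n jhub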

Important info I forgot to tell you: I added this to the config.yaml to disable persistent storage for the hub. I will add some NFS storage, but that part will have to wait until Monday.

hub:
  db:
    type: sqlite-memory

singleuser:
  storage:
    type: sqlite-memory

NAME                              READY   STATUS             RESTARTS   AGE
continuous-image-puller-pkbbt     1/1     Running            0          89m
continuous-image-puller-tfxmp     1/1     Running            0          89m
hub-658d848b66-xjlbj              0/1     CrashLoopBackOff   9          32m
proxy-db789b56c-v9gj7             1/1     Running            0          31m
user-scheduler-7db7bfbdc6-k4g5h   1/1     Running            0          89m
user-scheduler-7db7bfbdc6-td67g   1/1     Running            0          89m
Name:         continuous-image-puller-pkbbt
Namespace:    jhub
Priority:     0
Node:         kubenode01/192.168.56.3
Start Time:   Fri, 15 Jan 2021 18:40:49 +0000
Labels:       app=jupyterhub
              component=continuous-image-puller
              controller-revision-hash=5b5d476f58
              pod-template-generation=1
              release=jhub
Annotations:  <none>
Status:       Running
IP:           10.36.0.1
IPs:
  IP:           10.36.0.1
Controlled By:  DaemonSet/continuous-image-puller
Init Containers:
  image-pull-metadata-block:
    Container ID:  docker://3919ec4ad5a15b388695a245e44d85d55180e82373db2ad2fb5a892a9a7cb3da
    Image:         jupyterhub/k8s-network-tools:0.10.6
    Image ID:      docker-pullable://jupyterhub/k8s-network-tools@sha256:8e638fcc6dcfe921a832585b0cdc5a3a523fcaa1fb8a5d6e0b8be87bec4ff7c3
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Jan 2021 18:40:51 +0000
      Finished:     Fri, 15 Jan 2021 18:40:51 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
  image-pull-singleuser:
    Container ID:  docker://eba2ca4c529cff0658608eda6f0760e21a87ea0f7bd87cf1fa835a696ca9714b
    Image:         jupyterhub/k8s-singleuser-sample:0.10.6
    Image ID:      docker-pullable://jupyterhub/k8s-singleuser-sample@sha256:8bc191743dddf34d249693a226e9ece2f26f7b0e98b1a73f8f36056065e214f2
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Jan 2021 18:40:53 +0000
      Finished:     Fri, 15 Jan 2021 18:40:53 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
Containers:
  pause:
    Container ID:   docker://009e26c108dee88b9c48e477afa0579e14981046ddba03262f16e755d48ebb25
    Image:          k8s.gcr.io/pause:3.2
    Image ID:       docker-pullable://k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 15 Jan 2021 18:40:54 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:            <none>
QoS Class:          BestEffort
Node-Selectors:     <none>
Tolerations:        hub.jupyter.org/dedicated=user:NoSchedule
                    hub.jupyter.org_dedicated=user:NoSchedule
                    node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                    node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                    node.kubernetes.io/not-ready:NoExecute op=Exists
                    node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                    node.kubernetes.io/unreachable:NoExecute op=Exists
                    node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:             <none>


Name:         continuous-image-puller-tfxmp
Namespace:    jhub
Priority:     0
Node:         kubenode02/192.168.56.4
Start Time:   Fri, 15 Jan 2021 18:40:49 +0000
Labels:       app=jupyterhub
              component=continuous-image-puller
              controller-revision-hash=5b5d476f58
              pod-template-generation=1
              release=jhub
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
IPs:
  IP:           10.44.0.1
Controlled By:  DaemonSet/continuous-image-puller
Init Containers:
  image-pull-metadata-block:
    Container ID:  docker://aef46284b42593ff7a9f516057997dcab25dbe8f29e015bb820b32853257e355
    Image:         jupyterhub/k8s-network-tools:0.10.6
    Image ID:      docker-pullable://jupyterhub/k8s-network-tools@sha256:8e638fcc6dcfe921a832585b0cdc5a3a523fcaa1fb8a5d6e0b8be87bec4ff7c3
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Jan 2021 18:40:51 +0000
      Finished:     Fri, 15 Jan 2021 18:40:51 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
  image-pull-singleuser:
    Container ID:  docker://b5ac7f2ce49255b2935e3c09a72e23b5dede689e3a9123dc79c0b19290ffb495
    Image:         jupyterhub/k8s-singleuser-sample:0.10.6
    Image ID:      docker-pullable://jupyterhub/k8s-singleuser-sample@sha256:8bc191743dddf34d249693a226e9ece2f26f7b0e98b1a73f8f36056065e214f2
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 15 Jan 2021 18:40:52 +0000
      Finished:     Fri, 15 Jan 2021 18:40:52 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
Containers:
  pause:
    Container ID:   docker://22ed050f368b3f2fb2568b862eabe9ea5ad59f8879cd88183d63c9d784347b93
    Image:          k8s.gcr.io/pause:3.2
    Image ID:       docker-pullable://k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 15 Jan 2021 18:40:53 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        0
      memory:     0
    Environment:  <none>
    Mounts:       <none>
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:            <none>
QoS Class:          BestEffort
Node-Selectors:     <none>
Tolerations:        hub.jupyter.org/dedicated=user:NoSchedule
                    hub.jupyter.org_dedicated=user:NoSchedule
                    node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                    node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                    node.kubernetes.io/not-ready:NoExecute op=Exists
                    node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                    node.kubernetes.io/unreachable:NoExecute op=Exists
                    node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:             <none>


Name:         hub-658d848b66-xjlbj
Namespace:    jhub
Priority:     0
Node:         kubenode02/192.168.56.4
Start Time:   Fri, 15 Jan 2021 19:38:02 +0000
Labels:       app=jupyterhub
              component=hub
              hub.jupyter.org/network-access-proxy-api=true
              hub.jupyter.org/network-access-proxy-http=true
              hub.jupyter.org/network-access-singleuser=true
              pod-template-hash=658d848b66
              release=jhub
Annotations:  checksum/config-map: 17dc341cea103f6ea027910b562e2c1fa931d7f43abe15d1d5021307d9d6bd84
              checksum/secret: 2393130dbfbeee92db28eca42e53e7f2914b04653b9855aba7359474edfcba69
Status:       Running
IP:           10.44.0.2
IPs:
  IP:           10.44.0.2
Controlled By:  ReplicaSet/hub-658d848b66
Containers:
  hub:
    Container ID:  docker://3e484d3f7570759aeff0bb09c86b18a201d9686fb22c42df93ac381097e47e3d
    Image:         jupyterhub/k8s-hub:0.10.6
    Image ID:      docker-pullable://jupyterhub/k8s-hub@sha256:e9b52065b33d316fe48aaf1c14fe2ca1ca30980e810ab6712694b311f12e7be0
    Port:          8081/TCP
    Host Port:     0/TCP
    Args:
      jupyterhub
      --config
      /etc/jupyterhub/jupyterhub_config.py
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 15 Jan 2021 20:10:29 +0000
      Finished:     Fri, 15 Jan 2021 20:11:03 +0000
    Ready:          False
    Restart Count:  10
    Requests:
      cpu:      200m
      memory:   512Mi
    Readiness:  http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jhub
      POD_NAMESPACE:           jhub (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
    Mounts:
      /etc/jupyterhub/config/ from config (rw)
      /etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
      /etc/jupyterhub/secret/ from secret (rw)
      /etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
      /var/run/secrets/kubernetes.io/serviceaccount from hub-token-zqnzh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub-config
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-secret
    Optional:    false
  hub-token-zqnzh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-token-zqnzh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  33m                   default-scheduler  Successfully assigned jhub/hub-658d848b66-xjlbj to kubenode02
  Normal   Pulling    33m                   kubelet            Pulling image "jupyterhub/k8s-hub:0.10.6"
  Normal   Pulled     32m                   kubelet            Successfully pulled image "jupyterhub/k8s-hub:0.10.6" in 50.952815087s
  Normal   Pulled     31m                   kubelet            Container image "jupyterhub/k8s-hub:0.10.6" already present on machine
  Normal   Created    31m (x2 over 32m)     kubelet            Created container hub
  Normal   Started    31m (x2 over 32m)     kubelet            Started container hub
  Warning  Unhealthy  23m (x106 over 32m)   kubelet            Readiness probe failed: Get "http://10.44.0.2:8081/hub/health": dial tcp 10.44.0.2:8081: connect: connection refused
  Warning  BackOff    3m6s (x110 over 31m)  kubelet            Back-off restarting failed container


Name:         proxy-db789b56c-v9gj7
Namespace:    jhub
Priority:     0
Node:         kubenode02/192.168.56.4
Start Time:   Fri, 15 Jan 2021 19:39:01 +0000
Labels:       app=jupyterhub
              component=proxy
              hub.jupyter.org/network-access-hub=true
              hub.jupyter.org/network-access-singleuser=true
              pod-template-hash=db789b56c
              release=jhub
Annotations:  checksum/hub-secret: 3149db99fd04fb4ca12999833eedc9035765c70e1c217091e369b70e20ac67a9
              checksum/proxy-secret: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
Status:       Running
IP:           10.44.0.3
IPs:
  IP:           10.44.0.3
Controlled By:  ReplicaSet/proxy-db789b56c
Containers:
  chp:
    Container ID:  docker://e51d92c7ee2b18c67725def52e9f1d294bbd8ae54d59bd983a53d8c974b60d5a
    Image:         jupyterhub/configurable-http-proxy:4.2.2
    Image ID:      docker-pullable://jupyterhub/configurable-http-proxy@sha256:81bd96729c14110aae677bd603854cab01107be18534d07b97a882e716bcdf7a
    Ports:         8000/TCP, 8001/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      configurable-http-proxy
      --ip=::
      --api-ip=::
      --api-port=8001
      --default-target=http://hub:$(HUB_SERVICE_PORT)
      --error-target=http://hub:$(HUB_SERVICE_PORT)/hub/error
      --port=8000
    State:          Running
      Started:      Fri, 15 Jan 2021 19:39:02 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      200m
      memory:   512Mi
    Liveness:   http-get http://:http/_chp_healthz delay=60s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:http/_chp_healthz delay=0s timeout=1s period=2s #success=1 #failure=3
    Environment:
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j9hdn (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-j9hdn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-j9hdn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  32m   default-scheduler  Successfully assigned jhub/proxy-db789b56c-v9gj7 to kubenode02
  Normal   Pulled     32m   kubelet            Container image "jupyterhub/configurable-http-proxy:4.2.2" already present on machine
  Normal   Created    32m   kubelet            Created container chp
  Normal   Started    32m   kubelet            Started container chp
  Warning  Unhealthy  32m   kubelet            Readiness probe failed: Get "http://10.44.0.3:8000/_chp_healthz": dial tcp 10.44.0.3:8000: connect: connection refused


Name:         user-scheduler-7db7bfbdc6-k4g5h
Namespace:    jhub
Priority:     0
Node:         kubenode01/192.168.56.3
Start Time:   Fri, 15 Jan 2021 18:40:49 +0000
Labels:       app=jupyterhub
              component=user-scheduler
              pod-template-hash=7db7bfbdc6
              release=jhub
Annotations:  checksum/config-map: 5ca07792017036f4ad0509d5fcabdbe2e3c972a9e94ca111a0f16a13363682f1
Status:       Running
IP:           10.36.0.3
IPs:
  IP:           10.36.0.3
Controlled By:  ReplicaSet/user-scheduler-7db7bfbdc6
Containers:
  user-scheduler:
    Container ID:  docker://e0d6cef268856e5180539643615c9b48c5c2857bf2c5b6aa0a98044a93bda64a
    Image:         k8s.gcr.io/kube-scheduler:v1.19.2
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a1c3c6de52cdf46ff050ccfd31ff
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-scheduler
      --config=/etc/user-scheduler/config.yaml
      --authentication-skip-lookup=true
      --v=4
    State:          Running
      Started:      Fri, 15 Jan 2021 18:40:53 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        50m
      memory:     256Mi
    Liveness:     http-get http://:10251/healthz delay=15s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:10251/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/user-scheduler from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from user-scheduler-token-cxgn7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      user-scheduler
    Optional:  false
  user-scheduler-token-cxgn7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  user-scheduler-token-cxgn7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>


Name:         user-scheduler-7db7bfbdc6-td67g
Namespace:    jhub
Priority:     0
Node:         kubenode01/192.168.56.3
Start Time:   Fri, 15 Jan 2021 18:40:49 +0000
Labels:       app=jupyterhub
              component=user-scheduler
              pod-template-hash=7db7bfbdc6
              release=jhub
Annotations:  checksum/config-map: 5ca07792017036f4ad0509d5fcabdbe2e3c972a9e94ca111a0f16a13363682f1
Status:       Running
IP:           10.36.0.2
IPs:
  IP:           10.36.0.2
Controlled By:  ReplicaSet/user-scheduler-7db7bfbdc6
Containers:
  user-scheduler:
    Container ID:  docker://dce634cb967e92c2b0c15318e5632a9d288dcee9d0eaeddeb0a9f23d2c35078f
    Image:         k8s.gcr.io/kube-scheduler:v1.19.2
    Image ID:      docker-pullable://k8s.gcr.io/kube-scheduler@sha256:bb058c7394fad4d968d366b8b372698a1144a1c3c6de52cdf46ff050ccfd31ff
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/local/bin/kube-scheduler
      --config=/etc/user-scheduler/config.yaml
      --authentication-skip-lookup=true
      --v=4
    State:          Running
      Started:      Fri, 15 Jan 2021 18:40:52 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        50m
      memory:     256Mi
    Liveness:     http-get http://:10251/healthz delay=15s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:10251/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/user-scheduler from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from user-scheduler-token-cxgn7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      user-scheduler
    Optional:  false
  user-scheduler-token-cxgn7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  user-scheduler-token-cxgn7
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

Disabling storage whilst troubleshooting is a good thing; it’s one less thing that can go wrong :smiley:. Though note that to disable the singleuser storage you should set singleuser.storage.type: none (see “Customizing User Storage” in the Zero to JupyterHub with Kubernetes documentation).
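
In config.yaml that corresponds to:

singleuser:
  storage:
    type: none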

From your output:

Name:         hub-658d848b66-xjlbj
...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  33m                   default-scheduler  Successfully assigned jhub/hub-658d848b66-xjlbj to kubenode02
  Normal   Pulling    33m                   kubelet            Pulling image "jupyterhub/k8s-hub:0.10.6"
  Normal   Pulled     32m                   kubelet            Successfully pulled image "jupyterhub/k8s-hub:0.10.6" in 50.952815087s
  Normal   Pulled     31m                   kubelet            Container image "jupyterhub/k8s-hub:0.10.6" already present on machine
  Normal   Created    31m (x2 over 32m)     kubelet            Created container hub
  Normal   Started    31m (x2 over 32m)     kubelet            Started container hub
  Warning  Unhealthy  23m (x106 over 32m)   kubelet            Readiness probe failed: Get "http://10.44.0.2:8081/hub/health": dial tcp 10.44.0.2:8081: connect: connection refused
  Warning  BackOff    3m6s (x110 over 31m)  kubelet            Back-off restarting failed container

Name:         proxy-db789b56c-v9gj7
...
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  32m   default-scheduler  Successfully assigned jhub/proxy-db789b56c-v9gj7 to kubenode02
  Normal   Pulled     32m   kubelet            Container image "jupyterhub/configurable-http-proxy:4.2.2" already present on machine
  Normal   Created    32m   kubelet            Created container chp
  Normal   Started    32m   kubelet            Started container chp
  Warning  Unhealthy  32m   kubelet            Readiness probe failed: Get "http://10.44.0.3:8000/_chp_healthz": dial tcp 10.44.0.3:8000: connect: connection refused

Your proxy is failing its readiness probe, which means the hub pod can’t connect to it and is therefore also failing. Could you check the logs of the proxy?

kubectl logs proxy-db789b56c-v9gj7 -n jhub

Okay, out of the blue the proxy/hub suddenly started working and everything seems fine and running. That maybe has something to do with my computer finally agreeing to connect to the right network. :tada:

I put this in my config file:

ingress:
  enabled: true
  hosts:
    - ""
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /jupyter/$1

I did not think it would automatically create an ingress itself. I thought I would have to create the ingress myself and that this section just configured it in the Helm chart. It would be nice to add that to the documentation (advanced/ingress).
So it is running, but I am a little lost. Where can I access the app?

kubectl describe ingress -n jhub
Name:             jupyterhub
Namespace:        jhub
Address:          192.168.56.3
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /jupyter   proxy-public:80 (10.36.0.6:8000)
Annotations:  kubernetes.io/ingress.class: nginx
              meta.helm.sh/release-name: jhub
              meta.helm.sh/release-namespace: jhub
              nginx.ingress.kubernetes.io/force-ssl-redirect: false
              nginx.ingress.kubernetes.io/rewrite-target: /jupyter/$1
              nginx.ingress.kubernetes.io/ssl-redirect: false
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    13m (x3 over 14m)  nginx-ingress-controller  Scheduled for sync 

I tried 192.168.56.3/hub but the connection was refused.

The nginx ingress controller shows this error message:
Service "jhub/proxy-public" does not have any active Endpoint.
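
To see whether the service actually has endpoints and which ports it exposes, a quick check (a diagnostic sketch, assuming the jhub namespace) is:

kubectl get svc proxy-public -n jhub
kubectl get endpoints proxy-public -n jhub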

Thanks a lot for your help.

Wow, it works :tada:

The problem was that the created ingress had the wrong port. I changed it so the ingress uses the same port as the proxy-public service, and everything was fixed.
But in the end I switched to MetalLB, which is clearly easier to set up and work with.
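
For reference, a minimal MetalLB layer-2 configuration for a cluster like this one might look as follows; this is a sketch, and the address range is hypothetical (it needs to be a set of free addresses on the nodes’ 192.168.56.x network):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.56.240-192.168.56.250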
Thank you for your help @manics

In case it’s useful, I put together a step-by-step guide to a functioning Z2JH deployment on either k3s or microk8s, including a TLS deployment and optional NFS file sharing. I mainly did this to document the problems encountered with each setup.

Both microk8s and k3s provide very quick K8s setups. We recently migrated from a Google GKE cluster to microk8s across a set of on-prem nodes.

Thank you for developing these guides. I just tried microk8s because it would better suit my needs. However, I have hit a new error again… I opened an issue in your repo because I think it is more related to microk8s than JupyterHub. Tell me if you would prefer to continue here.

Errors are frustrating, but each time I learn more and more, so I guess that’s cool :slight_smile:
Have a good day.