Storage configuration example for an on-prem cluster

Hello, Members.

1. Overview

I’m a newbie to Z2JH (and k8s).

I installed Z2JH with the following command. As a result, the hub pod is stuck in Pending status.

kubectl describe shows the following message:

0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 No preemption victims found for incoming pod

A storage setting may be missing, but I’m not sure how to fix it.
Could anyone tell me how to solve this?

Best regards.

2. Environment

  • OS: Ubuntu 22.04
  • Helm: v3.12.3
  • Kubernetes: v1.27.4 (Self-Hosted)
  • Z2JH: 3.0.2

3. Installation

config.yaml

The documentation says that no configuration file is needed, so I first tried with an empty config.yaml file. But the pod was still in Pending status.

Initialize a Helm chart configuration file

As of version 1.0.0, you don’t need any configuration to get started so you can just create a config.yaml file with some helpful comments.

hub:
  db:
    pvc:
      storageClassName: external-nfs

Installation command

helm upgrade --cleanup-on-fail \
  --install jhub jupyterhub/jupyterhub \
  --namespace namespace-jupyterhub \
  --create-namespace \
  --version=3.0.2 \
  --values config.yaml

StorageClass configuration

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: external-nfs
provisioner: example.com/external-nfs
reclaimPolicy: Retain
parameters:
  server: 10.10.10.10
  path: /export_path
  readOnly: "false"

4. Logs

kubectl get pods --all-namespaces
...
namespace-jupyterhub   continuous-image-puller-gj6sl                   1/1     Running   0                16h
namespace-jupyterhub   hub-bcc98d754-rk6g2                             0/1     Pending   0                16h
namespace-jupyterhub   proxy-7db55f77fb-xp46z                          1/1     Running   0                16h
namespace-jupyterhub   user-scheduler-5b448ff99-4lrk8                  1/1     Running   0                16h
namespace-jupyterhub   user-scheduler-5b448ff99-9x8r2                  1/1     Running   0                16h
kubectl describe pods --namespace=namespace-jupyterhub hub-bcc98d754-rk6g2
Name:             hub-bcc98d754-rk6g2
Namespace:        namespace-jupyterhub
Priority:         0
Service Account:  hub
Node:             <none>
Labels:           app=jupyterhub
                  component=hub
                  hub.jupyter.org/network-access-proxy-api=true
                  hub.jupyter.org/network-access-proxy-http=true
                  hub.jupyter.org/network-access-singleuser=true
                  pod-template-hash=bcc98d754
                  release=jhub
Annotations:      checksum/config-map: 5bcffc906ea430544c047723a2f4de5d7e99e45e61944810d7583ff255195a30
                  checksum/secret: ad1ccc6aa6084bb09465e8c1bc7c7fea9462da15a9ffd7f7e71448e4cd7cdd7e
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/hub-bcc98d754
Containers:
  hub:
    Image:      jupyterhub/k8s-hub:3.0.2
    Port:       8081/TCP
    Host Port:  0/TCP
    Args:
      jupyterhub
      --config
      /usr/local/etc/jupyterhub/jupyterhub_config.py
      --upgrade-db
    Liveness:   http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
    Readiness:  http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jhub
      POD_NAMESPACE:           namespace-jupyterhub (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'>  Optional: false
    Mounts:
      /srv/jupyterhub from pvc (rw)
      /usr/local/etc/jupyterhub/config/ from config (rw)
      /usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
      /usr/local/etc/jupyterhub/secret/ from secret (rw)
      /usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8sg4p (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub
    Optional:    false
  pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  kube-api-access-8sg4p:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 hub.jupyter.org/dedicated=core:NoSchedule
                             hub.jupyter.org_dedicated=core:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  4m52s (x191 over 15h)  default-scheduler  0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 No preemption victims found for incoming pod..

You’ll need to look at what (if any) dynamic storage provisioners are supported by your K8s deployment; the K8s distributions offered by most public clouds include a provisioner.

If this is your own K8s deployment, then just creating a StorageClass isn’t enough: you need something to actually supply the storage.

For example, see Storage Classes | Kubernetes.

Kubernetes doesn’t include an internal NFS provisioner. You need to use an external provisioner to create a StorageClass for NFS.
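
If you don’t want to run a provisioner at all, another option is to pre-create PersistentVolumes statically so the claim has something to bind to. A minimal sketch, assuming the server and path from your StorageClass, and assuming the hub’s hub-db-dir claim requests 1Gi (check the actual request with kubectl get pvc):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-pv
spec:
  capacity:
    storage: 1Gi                   # must cover the PVC's request
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: external-nfs   # must match the PVC's storageClassName
  nfs:
    server: 10.10.10.10
    path: /export_path
```

With a static PV like this, the scheduler binds the pending hub-db-dir claim to it instead of waiting for a dynamic provisioner.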

Hello, @manics. Thank you for your reply.

The state has changed from Pending to Running…
(but it is still unhealthy).

1. Cause of the failure

  • I needed to set up a dynamic storage provisioner.
  • Kubernetes has no internal NFS provisioner.
  • So I needed to set up an external provisioner; I chose nfs-subdir-external-provisioner.
  • I installed nfs-subdir-external-provisioner with the following command.
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.10.10.10 \
    --set nfs.path=/path_exports \
    --set nfs.mountOptions={"nfsvers=3"} \
    --set storageClass.reclaimPolicy=Retain

2. Question

This is my understanding; is it correct?

  • hub.db.pvc.storageClassName must match the name of the StorageClass.
  • The StorageClass does not need to be in the same namespace as Z2JH (a StorageClass is cluster-scoped, not namespaced).
kubectl get sc # (default namespace)
NAME         PROVISIONER                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   cluster.local/nfs-subdir-external-provisioner   Retain          Immediate           true                   5m43s
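
Given the nfs-client class above, the Z2JH side would then just be (a sketch):

```yaml
hub:
  db:
    pvc:
      storageClassName: nfs-client   # must match the NAME column of `kubectl get sc`
```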

And the status is still Unhealthy. How can I check the logs?
Error: Readiness probe failed: Get "http://10.244.3.62:8081/hub/health": dial tcp 10.244.3.62:8081: connect: connection refused.

Thanks.

3. Logs

kubectl get pods --namespace=namespace-jupyterhub
NAME                             READY   STATUS    RESTARTS   AGE
continuous-image-puller-bsqnz    1/1     Running   0          39s
hub-7dc7b47ffc-pz766             0/1     Running   0          39s
proxy-55c8d5cc56-rlnwk           1/1     Running   0          39s
user-scheduler-5b448ff99-2ns7g   1/1     Running   0          39s
user-scheduler-5b448ff99-dprgs   1/1     Running   0          39s
kubectl describe pods --namespace=namespace-jupyterhub hub-7dc7b47ffc-pz766
Name:             hub-7dc7b47ffc-pz766
Namespace:        namespace-jupyterhub
Priority:         0
Service Account:  hub
Node:             k8node1/ip.add.re.ss
Start Time:       Tue, 29 Aug 2023 22:12:49 +0900
Labels:           app=jupyterhub
                  component=hub
                  hub.jupyter.org/network-access-proxy-api=true
                  hub.jupyter.org/network-access-proxy-http=true
                  hub.jupyter.org/network-access-singleuser=true
                  pod-template-hash=7dc7b47ffc
                  release=jhub
Annotations:      checksum/config-map: 5bcffc906ea430544c047723a2f4de5d7e99e45e61944810d7583ff255195a30
                  checksum/secret: 9fc0576ad91f0c583439bc4c3135d9e4a7c5618d67437e693f59267b00030a53
Status:           Running
IP:               10.244.3.78
IPs:
  IP:           10.244.3.78
Controlled By:  ReplicaSet/hub-7dc7b47ffc
Containers:
  hub:
    Container ID:  containerd://0a98f9b807dc3805d36370272b8c01dac2cdb86857dd3237330676ffc9a0ace7
    Image:         jupyterhub/k8s-hub:3.0.2
    Image ID:      docker.io/jupyterhub/k8s-hub@sha256:f8bb112dc09a9b47ac180f10055d7cef9ee115e521452cc829d0b0be1c185542
    Port:          8081/TCP
    Host Port:     0/TCP
    Args:
      jupyterhub
      --config
      /usr/local/etc/jupyterhub/jupyterhub_config.py
      --upgrade-db
    State:          Running
      Started:      Tue, 29 Aug 2023 22:12:52 +0900
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
    Readiness:      http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jhub
      POD_NAMESPACE:           namespace-jupyterhub (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'>  Optional: false
    Mounts:
      /srv/jupyterhub from pvc (rw)
      /usr/local/etc/jupyterhub/config/ from config (rw)
      /usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
      /usr/local/etc/jupyterhub/secret/ from secret (rw)
      /usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68lgc (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub
    Optional:    false
  pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  kube-api-access-68lgc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 hub.jupyter.org/dedicated=core:NoSchedule
                             hub.jupyter.org_dedicated=core:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    71s                 default-scheduler  Successfully assigned namespace-jupyterhub/hub-7dc7b47ffc-pz766 to k8node1
  Warning  FailedMount  70s                 kubelet            MountVolume.SetUp failed for volume "secret" : failed to sync secret cache: timed out waiting for the condition
  Normal   Pulled       68s                 kubelet            Container image "jupyterhub/k8s-hub:3.0.2" already present on machine
  Normal   Created      68s                 kubelet            Created container hub
  Normal   Started      68s                 kubelet            Started container hub
  Warning  Unhealthy    30s (x21 over 68s)  kubelet            Readiness probe failed: Get "http://10.244.3.78:8081/hub/health": dial tcp 10.244.3.78:8081: connect: connection refused

This is my understanding; is it correct?

Before you continue deploying JupyterHub, I think it’ll be helpful to check that your storage is working correctly. Can you try manually creating a pod and PVC, and seeing if the volume is created and bound? If so, that suggests your storage is functional.
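
For example, something like this should end up Bound within a minute or so (a sketch; the names and sizes are arbitrary, and it assumes your StorageClass is named nfs-client):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-test
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: storage-test
spec:
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: storage-test
  containers:
    - name: test
      image: busybox
      # Write a file to the volume, then stay alive so you can inspect it
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
```

Apply it with kubectl apply -f test.yaml; then kubectl get pvc storage-test should show STATUS Bound, and test.txt should appear under the corresponding directory on the NFS export.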

And the status is still Unhealthy. How can I check the logs?

Try kubectl logs -n <namespace> <podname> or kubectl logs -n <namespace> deploy/hub

You can also try turning on debug logging; see the debugging section of the Z2JH documentation.

Hello, @manics. Thank you for your reply.

I’ll try that manually (I’m not familiar with the k8s commands yet, so I’ll try it later).
At least, the NFS server created the file /nfs_exports/namespace-jupyterhub-hub-db-dir-pvc-7ef32dce-31a0-4cae-9a5f-23a1d6e29d18/jupyterhub.sqlite, so provisioning may be working.
(But the database file size is 0 bytes.)

I turned on the debug option as shown below.

custom:
  hoge: foo
hub:
  db:
    pvc:
      storageClassName: nfs-client
debug:
  enabled: true # <-- Add here
helm upgrade --cleanup-on-fail \
  --install jhub jupyterhub/jupyterhub \
  --namespace namespace-jupyterhub \
  --create-namespace \
  --version=3.0.2 \
  --values config.yaml
kubectl logs --namespace=namespace-jupyterhub hub-6c5bb9467f-sfdrl
[D 2023-08-31 11:54:38.132 JupyterHub application:902] Looking for /usr/local/etc/jupyterhub/jupyterhub_config in /srv/jupyterhub
Loading /usr/local/etc/jupyterhub/secret/values.yaml
No config at /usr/local/etc/jupyterhub/existing-secret/values.yaml
[D 2023-08-31 11:54:38.360 JupyterHub application:923] Loaded config file: /usr/local/etc/jupyterhub/jupyterhub_config.py
[I 2023-08-31 11:54:38.375 JupyterHub app:2859] Running JupyterHub version 4.0.2
[I 2023-08-31 11:54:38.375 JupyterHub app:2889] Using Authenticator: jupyterhub.auth.DummyAuthenticator-4.0.2
[I 2023-08-31 11:54:38.375 JupyterHub app:2889] Using Spawner: kubespawner.spawner.KubeSpawner-6.0.0
[I 2023-08-31 11:54:38.375 JupyterHub app:2889] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-4.0.2
[D 2023-08-31 11:54:38.378 JupyterHub app:1833] Connecting to db: sqlite:///jupyterhub.sqlite

I’ll investigate the details.

Thanks again.

Hello, @manics

I was able to debug the problem with the following command.

kubectl exec --stdin --tty --namespace=namespace-jupyterhub hub-857c7cd46d-hm8wq -- /bin/bash

And I realized that NFS wasn’t working correctly: some ports were blocked on the NFS server, so I stopped the ufw service.
After that, the status of the hub service changed to Running.
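
For anyone hitting the same thing: rather than disabling ufw entirely, a quick way to see which NFS-related ports are reachable from a client is a small script like this (a sketch; the host is the NFS server address from this thread, and for NFSv3 the mountd port varies unless it is pinned on the server):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# NFSv3 typically needs rpcbind (111) and nfsd (2049) reachable over TCP,
# plus mountd, whose port varies unless pinned in the server's NFS config.
for port in (111, 2049):
    status = "open" if is_port_open("10.10.10.10", port) else "blocked/unreachable"
    print(f"port {port}: {status}")
```

If 2049 shows as blocked, the PVC can bind (the provisioner talks to the API server) while the actual mount inside the pod still fails, which matches the symptoms above.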

namespace-jupyterhub   continuous-image-puller-zd9p8                      1/1     Running   0                5m34s
namespace-jupyterhub   hub-58494fc45-96pb2                                1/1     Running   0                5m33s
namespace-jupyterhub   nfs-subdir-external-provisioner-866b898dd7-x2vjv   1/1     Running   0                69m
namespace-jupyterhub   proxy-7fc76cc46c-jhwfb                             1/1     Running   0                5m33s
namespace-jupyterhub   user-scheduler-5b448ff99-7bt9z                     1/1     Running   0                5m33s
namespace-jupyterhub   user-scheduler-5b448ff99-jffbw                     1/1     Running   0                5m34s

It seems that the service still shows an Unhealthy status, but that problem is not related to NFS.

kubectl describe pods --namespace=namespace-jupyterhub hub-58494fc45-96pb2
Name:             hub-58494fc45-96pb2
Namespace:        namespace-jupyterhub
Priority:         0
Service Account:  hub
Node:             k8node1/192.168.10.69
Start Time:       Thu, 31 Aug 2023 22:24:46 +0900
Labels:           app=jupyterhub
                  component=hub
                  hub.jupyter.org/network-access-proxy-api=true
                  hub.jupyter.org/network-access-proxy-http=true
                  hub.jupyter.org/network-access-singleuser=true
                  pod-template-hash=58494fc45
                  release=jhub
Annotations:      checksum/config-map: 5bcffc906ea430544c047723a2f4de5d7e99e45e61944810d7583ff255195a30
                  checksum/secret: 268e38edd0aebe44567695aed0d2c4c5363c92e041c1d41af2da3f84b00e99b5
Status:           Running
IP:               10.244.3.135
IPs:
  IP:           10.244.3.135
Controlled By:  ReplicaSet/hub-58494fc45
Containers:
  hub:
    Container ID:  containerd://3986fcd9eefad2cf6c95ec465dc2c5da3bd64286f6902370ddb1e4d291d49c1d
    Image:         jupyterhub/k8s-hub:3.0.2
    Image ID:      docker.io/jupyterhub/k8s-hub@sha256:f8bb112dc09a9b47ac180f10055d7cef9ee115e521452cc829d0b0be1c185542
    Port:          8081/TCP
    Host Port:     0/TCP
    Args:
      jupyterhub
      --config
      /usr/local/etc/jupyterhub/jupyterhub_config.py
      --debug
      --upgrade-db
    State:          Running
      Started:      Thu, 31 Aug 2023 22:24:47 +0900
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
    Readiness:      http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jhub
      POD_NAMESPACE:           namespace-jupyterhub (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'>  Optional: false
    Mounts:
      /srv/jupyterhub from pvc (rw)
      /usr/local/etc/jupyterhub/config/ from config (rw)
      /usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
      /usr/local/etc/jupyterhub/secret/ from secret (rw)
      /usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lpl62 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub
    Optional:    false
  pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  kube-api-access-lpl62:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 hub.jupyter.org/dedicated=core:NoSchedule
                             hub.jupyter.org_dedicated=core:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  10m                default-scheduler  Successfully assigned namespace-jupyterhub/hub-58494fc45-96pb2 to k8node1
  Normal   Pulled     10m                kubelet            Container image "jupyterhub/k8s-hub:3.0.2" already present on machine
  Normal   Created    10m                kubelet            Created container hub
  Normal   Started    10m                kubelet            Started container hub
  Warning  Unhealthy  10m (x3 over 10m)  kubelet            Readiness probe failed: Get "http://10.244.3.135:8081/hub/health": dial tcp 10.244.3.135:8081: connect: connection refused

I succeeded in showing the login UI, so the storage issue has probably been fixed.

The following error is probably a k8s node issue.

2023-08-31T13:46:30.720811Z [Warning] 0/4 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/4 nodes are available: 4 No preemption victims found for incoming pod..