I am currently deploying JupyterHub on a Kubernetes cluster. I can reach the login screen, but user servers fail to spawn.
I suspect the cause lies in the persistent volume settings, but I am not sure how to fix them.
I have included the YAML files below.
# storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage-class
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
# pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage-class
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - sample-worker
                - sample-worker2
# pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage-class
  resources:
    requests:
      storage: 1Gi
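For completeness, here is how I understand the Zero to JupyterHub Helm chart is pointed at a storage class for per-user volumes. This is only a sketch based on my reading of the chart's `singleuser.storage` documentation; I have not yet confirmed that my own `config.yaml` sets this, which may itself be the problem:

```yaml
# config.yaml (Helm values for the zero-to-jupyterhub chart) -- sketch, untested
singleuser:
  storage:
    # "dynamic" asks the spawner to create a PVC per user at spawn time
    type: dynamic
    capacity: 1Gi
    dynamic:
      # Storage class the per-user PVCs should request
      storageClass: local-storage-class
```

If the chart instead defaults to dynamic provisioning against a class with no provisioner (mine uses `kubernetes.io/no-provisioner`), I imagine the spawn-time PVC would simply stay Pending.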
The current status of the cluster is as follows:
$ kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/vectoradd 0/1 Completed 0 4d6h
gpu-operator pod/gpu-feature-discovery-cw5kt 1/1 Running 0 4d6h
gpu-operator pod/gpu-feature-discovery-d4r5h 1/1 Running 0 4d6h
gpu-operator pod/gpu-operator-1673067409-node-feature-discovery-master-5646fc92q 1/1 Running 0 4d6h
gpu-operator pod/gpu-operator-1673067409-node-feature-discovery-worker-4k2kz 1/1 Running 2 (4d6h ago) 4d6h
gpu-operator pod/gpu-operator-1673067409-node-feature-discovery-worker-jxsk5 1/1 Running 0 4d6h
gpu-operator pod/gpu-operator-5c9754f456-xt9vf 1/1 Running 2 (4d6h ago) 4d6h
gpu-operator pod/nvidia-container-toolkit-daemonset-47qzg 1/1 Running 0 4d6h
gpu-operator pod/nvidia-container-toolkit-daemonset-56dpw 1/1 Running 0 4d6h
gpu-operator pod/nvidia-cuda-validator-rbwp6 0/1 Completed 0 4d6h
gpu-operator pod/nvidia-cuda-validator-xnckd 0/1 Completed 0 4d6h
gpu-operator pod/nvidia-dcgm-exporter-4ld6z 1/1 Running 0 4d6h
gpu-operator pod/nvidia-dcgm-exporter-bqf2j 1/1 Running 0 4d6h
gpu-operator pod/nvidia-device-plugin-daemonset-46zwc 1/1 Running 0 4d6h
gpu-operator pod/nvidia-device-plugin-daemonset-89x5z 1/1 Running 0 4d6h
gpu-operator pod/nvidia-device-plugin-validator-5ltr4 0/1 Completed 0 4d6h
gpu-operator pod/nvidia-device-plugin-validator-r4c65 0/1 Completed 0 4d6h
gpu-operator pod/nvidia-operator-validator-bxwlg 1/1 Running 0 4d6h
gpu-operator pod/nvidia-operator-validator-jf7lb 1/1 Running 0 4d6h
jhub pod/continuous-image-puller-9k565 1/1 Running 0 148m
jhub pod/continuous-image-puller-tsqgh 1/1 Running 0 148m
jhub pod/hub-ccd996468-4f9hv 1/1 Running 1 (26m ago) 83m
jhub pod/proxy-9fcc96474-9swgx 1/1 Running 0 89m
jhub pod/user-scheduler-5cb7469479-656v8 1/1 Running 0 148m
jhub pod/user-scheduler-5cb7469479-cbdnv 1/1 Running 0 148m
kube-flannel pod/kube-flannel-ds-dsgzs 1/1 Running 0 5d1h
kube-flannel pod/kube-flannel-ds-knsg6 1/1 Running 0 5d2h
kube-flannel pod/kube-flannel-ds-rfmgc 1/1 Running 1 (5d1h ago) 5d1h
kube-system pod/coredns-787d4945fb-6hqsr 1/1 Running 0 5d2h
kube-system pod/coredns-787d4945fb-pc5rg 1/1 Running 0 5d2h
kube-system pod/etcd-onoyama-master 1/1 Running 0 5d2h
kube-system pod/kube-apiserver-onoyama-master 1/1 Running 0 5d2h
kube-system pod/kube-controller-manager-onoyama-master 1/1 Running 2 (4d6h ago) 5d2h
kube-system pod/kube-proxy-kvcfn 1/1 Running 0 5d2h
kube-system pod/kube-proxy-trvfg 1/1 Running 0 5d1h
kube-system pod/kube-proxy-zw5qp 1/1 Running 0 5d1h
kube-system pod/kube-scheduler-onoyama-master 1/1 Running 2 (4d6h ago) 5d2h
kubernetes-dashboard pod/dashboard-metrics-scraper-5f8ccd7998-lz79c 1/1 Running 0 3d5h
kubernetes-dashboard pod/kubernetes-dashboard-bf69db5db-kv5tz 1/1 Running 0 3d5h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d2h
gpu-operator service/gpu-operator ClusterIP 10.101.188.195 <none> 8080/TCP 4d6h
gpu-operator service/gpu-operator-1673067409-node-feature-discovery-master ClusterIP 10.105.105.90 <none> 8080/TCP 4d6h
gpu-operator service/nvidia-dcgm-exporter ClusterIP 10.102.214.218 <none> 9400/TCP 4d6h
jhub service/hub ClusterIP 10.108.183.35 <none> 8081/TCP 148m
jhub service/proxy-api ClusterIP 10.98.234.170 <none> 8001/TCP 148m
jhub service/proxy-public LoadBalancer 10.96.224.166 <pending> 80:31463/TCP 148m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d2h
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.97.126.241 <none> 8000/TCP 3d7h
kubernetes-dashboard service/kubernetes-dashboard ClusterIP 10.110.201.186 <none> 443/TCP 3d7h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
gpu-operator daemonset.apps/gpu-feature-discovery 2 2 2 2 2 nvidia.com/gpu.deploy.gpu-feature-discovery=true 4d6h
gpu-operator daemonset.apps/gpu-operator-1673067409-node-feature-discovery-worker 2 2 2 2 2 <none> 4d6h
gpu-operator daemonset.apps/nvidia-container-toolkit-daemonset 2 2 2 2 2 nvidia.com/gpu.deploy.container-toolkit=true 4d6h
gpu-operator daemonset.apps/nvidia-dcgm-exporter 2 2 2 2 2 nvidia.com/gpu.deploy.dcgm-exporter=true 4d6h
gpu-operator daemonset.apps/nvidia-device-plugin-daemonset 2 2 2 2 2 nvidia.com/gpu.deploy.device-plugin=true 4d6h
gpu-operator daemonset.apps/nvidia-mig-manager 0 0 0 0 0 nvidia.com/gpu.deploy.mig-manager=true 4d6h
gpu-operator daemonset.apps/nvidia-operator-validator 2 2 2 2 2 nvidia.com/gpu.deploy.operator-validator=true 4d6h
jhub daemonset.apps/continuous-image-puller 2 2 2 2 2 <none> 148m
kube-flannel daemonset.apps/kube-flannel-ds 3 3 3 3 3 <none> 5d2h
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 5d2h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
gpu-operator deployment.apps/gpu-operator 1/1 1 1 4d6h
gpu-operator deployment.apps/gpu-operator-1673067409-node-feature-discovery-master 1/1 1 1 4d6h
jhub deployment.apps/hub 1/1 1 1 148m
jhub deployment.apps/proxy 1/1 1 1 148m
jhub deployment.apps/user-scheduler 2/2 2 2 148m
kube-system deployment.apps/coredns 2/2 2 2 5d2h
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 3d7h
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 3d7h
NAMESPACE NAME DESIRED CURRENT READY AGE
gpu-operator replicaset.apps/gpu-operator-1673067409-node-feature-discovery-master-564695f474 1 1 1 4d6h
gpu-operator replicaset.apps/gpu-operator-5c9754f456 1 1 1 4d6h
jhub replicaset.apps/hub-58678cff99 0 0 0 148m
jhub replicaset.apps/hub-589db65768 0 0 0 84m
jhub replicaset.apps/hub-5cbb65558b 0 0 0 89m
jhub replicaset.apps/hub-7cbf976b89 0 0 0 148m
jhub replicaset.apps/hub-7dbc7964d9 0 0 0 86m
jhub replicaset.apps/hub-b6b7f8b94 0 0 0 135m
jhub replicaset.apps/hub-ccd996468 1 1 1 83m
jhub replicaset.apps/proxy-598944d59c 0 0 0 148m
jhub replicaset.apps/proxy-86d6798858 0 0 0 148m
jhub replicaset.apps/proxy-9fcc96474 1 1 1 89m
jhub replicaset.apps/proxy-c68b44945 0 0 0 135m
jhub replicaset.apps/user-scheduler-5cb7469479 2 2 2 148m
kube-system replicaset.apps/coredns-787d4945fb 2 2 2 5d2h
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-5f8ccd7998 1 1 1 3d7h
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-bf69db5db 1 1 1 3d7h
NAMESPACE NAME READY AGE
jhub statefulset.apps/user-placeholder 0/0 148m
I am in real trouble, and I would be very grateful for any help.
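If more detail would help, I can share the output of the following diagnostics (I believe the spawn-time claims in the `jhub` namespace follow the chart's `claim-<username>` naming, so any Pending PVC there should show the binding failure):

```shell
# Does the local PV ever bind to a claim?
kubectl get pv,pvc -A

# Events on the spawn-time claims (e.g. "waiting for first consumer" / no matching PV)
kubectl describe pvc -n jhub

# Spawner-side errors from the hub pod
kubectl logs -n jhub deploy/hub
```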