Thanks for everyone’s effort to bring Z2JH to a Raspberry Pi cluster. I’m running into an issue where the hub-* pod is stuck in Pending status.
$ kubectl get pod --namespace jhub
NAME                             READY   STATUS    RESTARTS   AGE
continuous-image-puller-2pb8r    1/1     Running   0          14s
continuous-image-puller-nvqx8    1/1     Running   0          14s
continuous-image-puller-xsnqg    1/1     Running   0          14s
hub-66fc775ff5-rt57l             0/1     Pending   0          14s
proxy-5f7f7545d9-pmcj6           1/1     Running   0          22m
user-scheduler-d5b574c56-p897k   1/1     Running   0          22m
user-scheduler-d5b574c56-wqqqd   1/1     Running   0          22m
manics
April 13, 2021, 7:06pm
Hi! Please could you give us:
Full details about how you set up Kubernetes
Your Z2JH config file
Output of kubectl describe deploy/hub
Output of kubectl describe pod <name of your hub pod>
If the problem is related to resources that aren’t available in a minimal Kubernetes distribution, you could try this config just for testing:
hub:
  db:
    type: sqlite-memory
proxy:
  service:
    type: NodePort
singleuser:
  storage:
    type: none
  image:
    name: sakuraiyuta/base-notebook
    tag: latest
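To test this, save it as config.yaml and apply it with the usual Z2JH Helm upgrade (a sketch assuming the release and namespace are both jhub, as elsewhere in this thread):

# Assumes the chart repo is already added:
# helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm upgrade --cleanup-on-fail --install jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --version 0.11.1 \
  --values config.yaml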
I used the instructions I found at Build a Kubernetes cluster with the Raspberry Pi | Opensource.com . My control-plane/master node is running Ubuntu 20.10 Desktop, and I have 3 additional nodes running Ubuntu 20.10 Server.
My Z2JH config file is:
proxy:
  secretToken: "<RANDOM_HEX>"
singleuser:
  image:
    name: sakuraiyuta/minimal-notebook
    tag: latest
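(The <RANDOM_HEX> placeholder stands for a 32-byte random hex string; the Z2JH docs generate it with openssl:)

openssl rand -hex 32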
My deployment is:
$ kubectl describe deploy/hub --namespace jhub
Name: hub
Namespace: jhub
CreationTimestamp: Tue, 13 Apr 2021 10:27:43 -0700
Labels: app=jupyterhub
app.kubernetes.io/managed-by=Helm
chart=jupyterhub-0.11.1-n393.h2aa513d9
component=hub
heritage=Helm
release=jhub
Annotations: deployment.kubernetes.io/revision: 2
meta.helm.sh/release-name: jhub
meta.helm.sh/release-namespace: jhub
Selector: app=jupyterhub,component=hub,release=jhub
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
release=jhub
Annotations: checksum/config-map: eef9a4c7628312ea595fdeb137a5556afc598ff8b9bd507fac80362b11f41dee
checksum/secret: 2ea3be2953b23720d441d9eeefcb7801584fc4dd57eaf98fc0377386affaf473
Service Account: hub
Containers:
hub:
Image: jupyterhub/k8s-hub:0.11.1-n392.h6be4ace0
Port: 8081/TCP
Host Port: 0/TCP
Args:
jupyterhub
--config
/usr/local/etc/jupyterhub/jupyterhub_config.py
--upgrade-db
Liveness: http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
Readiness: http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'> Optional: false
Mounts:
/srv/jupyterhub from pvc (rw)
/usr/local/etc/jupyterhub/config/ from config (rw)
/usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
/usr/local/etc/jupyterhub/secret/ from secret (rw)
/usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub
Optional: false
pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: hub-66fc775ff5 (1/1 replicas created)
Events: <none>
Splitting this up because the system says I’m only allowed to post 2 links.
My pod is:
$ kubectl describe pod hub-66fc775ff5-rt57l --namespace jhub
Name: hub-66fc775ff5-rt57l
Namespace: jhub
Priority: 0
Node: <none>
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=66fc775ff5
release=jhub
Annotations: checksum/config-map: eef9a4c7628312ea595fdeb137a5556afc598ff8b9bd507fac80362b11f41dee
checksum/secret: 2ea3be2953b23720d441d9eeefcb7801584fc4dd57eaf98fc0377386affaf473
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/hub-66fc775ff5
Containers:
hub:
Image: jupyterhub/k8s-hub:0.11.1-n392.h6be4ace0
Port: 8081/TCP
Host Port: 0/TCP
Args:
jupyterhub
--config
/usr/local/etc/jupyterhub/jupyterhub_config.py
--upgrade-db
Liveness: http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
Readiness: http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: jhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'> Optional: false
Mounts:
/srv/jupyterhub from pvc (rw)
/usr/local/etc/jupyterhub/config/ from config (rw)
/usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
/usr/local/etc/jupyterhub/secret/ from secret (rw)
/usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7cws (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub
Optional: false
pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
kube-api-access-b7cws:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: hub.jupyter.org/dedicated=core:NoSchedule
hub.jupyter.org_dedicated=core:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 156m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.
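Since that event points at an unbound claim, inspecting the PVC and the cluster’s storage classes is the natural next step (a sketch, using the names from this thread):

kubectl get pvc --namespace jhub
kubectl describe pvc hub-db-dir --namespace jhub
# A bare-metal kubeadm cluster ships with no default StorageClass,
# so a claim with no matching PV has nothing to bind to:
kubectl get storageclass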
GeorL
April 14, 2021, 5:33pm
Hello all,
thanks to everyone for the effort!
I faced two issues when deploying JupyterHub on my RPi cluster:
The hub pod is in Pending status because it is waiting for its PVC (PersistentVolumeClaim) to be bound:
@ccordero5500
“Warning FailedScheduling 156m default-scheduler 0/4 nodes are available: 4 pod has unbound immediate PersistentVolumeClaims.”
To create a suitable local PersistentVolume, you can use:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-consul-pv0
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  claimRef:
    namespace: jupyterhub
    name: hub-db-dir
  hostPath:
    path: "/mnt/data"
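Saved as, say, hub-db-dir-pv.yaml (a hypothetical filename), it can be applied and verified like this; note that the claimRef namespace must match wherever the hub-db-dir PVC actually lives (jhub in the earlier posts, jupyterhub here):

kubectl apply -f hub-db-dir-pv.yaml
# Both the PV and the claim should report Bound:
kubectl get pv
kubectl get pvc --namespace jupyterhub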
The proxy-public service (a LoadBalancer) does not get an EXTERNAL-IP to access the hub. As far as I understand the Kubernetes LoadBalancer object, the EXTERNAL-IP is normally assigned by a cloud provider.
On bare metal this can be worked around by installing MetalLB (MetalLB, bare metal load-balancer for Kubernetes). It works just fine.
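For reference, MetalLB of that era is configured through a ConfigMap; a minimal layer2 sketch (the address range is an assumption, pick a free range on your own LAN):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # Addresses MetalLB may hand out to LoadBalancer services:
      - 192.168.1.240-192.168.1.250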
However, the hub pod is now in the CrashLoopBackOff state. The pod description says:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 24m default-scheduler Successfully assigned jupyterhub/hub-755965b5ff-ztd4m to k8s-node-02
Normal Pulled 23m (x4 over 24m) kubelet Container image "sakuraiyuta/jupyterhub-k8s-hub:0.11.1-aarch64" already present on machine
Normal Created 23m (x4 over 24m) kubelet Created container hub
Normal Started 23m (x4 over 24m) kubelet Started container hub
Warning Unhealthy 23m (x6 over 24m) kubelet Readiness probe failed: Get "http://10.244.2.50:8081/hub/health": dial tcp 10.244.2.50:8081: connect: connection refused
Warning BackOff 4m37s (x98 over 24m) kubelet Back-off restarting failed container
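The readiness-probe failures only say the hub never came up; the container logs usually show the actual traceback (a sketch, substituting your own pod name and namespace):

kubectl logs hub-755965b5ff-ztd4m --namespace jupyterhub
# If the container has already restarted, fetch the previous attempt's logs:
kubectl logs --previous hub-755965b5ff-ztd4m --namespace jupyterhub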
manics
April 15, 2021, 8:15pm
This indicates a problem with your K8s storage. Could you try the minimal configuration from my earlier reply above, which disables all storage and uses a NodePort? If that works you can start adding features back in.
@GeorL what does kubectl describe show for your hub and proxy pods? Would you also mind trying the above minimal configuration?