Z2JH on a Raspberry Pi K8s Cluster

I’ve recently set up a 7-node K8s cluster on some RPi 3+s and was wondering what it would take to get Z2JH (Zero to JupyterHub) running on it.

I’m new to K8s and JH, but willing to put the work in if someone can point me in the right direction.

Thanks!

Kubernetes is an abstraction layer, so Z2JH should just work with the correct configuration. The easiest way to get started is to switch off all the extra features, like persistent storage. If you have problems, post your full configuration and details of your setup here and someone might be able to help.
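
If it helps, a minimal sketch of such a stripped-down config.yaml for Z2JH 0.8.x might look like this (the secretToken value is a placeholder; singleuser.storage.type: none turns off per-user PersistentVolumeClaims, at the cost of losing user data when a pod stops):

    proxy:
      secretToken: "<SECRET>"
    singleuser:
      storage:
        type: none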

I tried using helm to install, but it failed.

First:

    pi@k8s-master-1:~ $ helm upgrade --install jhub jupyterhub/jupyterhub  \
    --namespace jhub --version=0.8.2 --values config.yaml 
    Release "jhub" does not exist. Installing it now.
    Error: create: failed to create: namespaces "jhub" not found.

Second try (left --namespace off):

    pi@k8s-master-1:~ $ helm upgrade --install jhub jupyterhub/jupyterhub  --version=0.8.2 --values config.yaml
    Release "jhub" does not exist. Installing it now.
    Error: failed pre-install: timed out waiting for the condition
    pi@k8s-master-1:~ $

Third try, extending the timeout: I get the following error:

    pi@k8s-master-1:~ $ helm upgrade --install jhub jupyterhub/jupyterhub  --version=0.8.2 --values config.yaml --timeout=30m
    Release "jhub" does not exist. Installing it now.
    Error: failed pre-install: job failed: BackoffLimitExceeded
    pi@k8s-master-1:~ $

This sounds like a very cool project!!

I would expect Helm to just create the namespace for you instead of giving an error. Could this be because the Kubernetes “flavour” you use is somehow slimmed down or missing a feature?

What does kubectl get pods show you while the install command is running? BackoffLimitExceeded sounds like a particular pod (or Job) keeps failing and being retried.

One thing I’d expect to cause trouble is that RPis are ARM-based (I think), and I have no idea how many of our Docker images would just work on an architecture that isn’t x86.

Which version of Helm are you using? Version 3 doesn’t create the namespace for you and isn’t officially supported by Z2JH (though I’m using it and it mostly works).
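
If it is the missing namespace, creating it by hand before the install should get past that first error. Something like:

    kubectl create namespace jhub
    helm upgrade --install jhub jupyterhub/jupyterhub \
      --namespace jhub --version=0.8.2 --values config.yaml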

Can you also show us your config?

Here is the output of kubectl get pods --all-namespaces:

NAMESPACE     NAME                                   READY   STATUS                  RESTARTS   AGE
default       hook-image-awaiter-58nkw               0/1     Error                   0          24h
default       hook-image-awaiter-5hqxg               0/1     Error                   0          24h
default       hook-image-awaiter-775x8               0/1     Error                   0          24h
default       hook-image-awaiter-cq2sc               0/1     Error                   0          24h
default       hook-image-awaiter-ctwsb               0/1     Error                   0          24h
default       hook-image-awaiter-czkcc               0/1     Error                   0          24h
default       hook-image-awaiter-lbwgs               0/1     Error                   0          24h
default       hook-image-puller-5htvp                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-98nsd                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-gjjrp                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-hgqcf                0/1     Init:CrashLoopBackOff   289        24h
default       hook-image-puller-lncll                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-tgj56                0/1     Init:CrashLoopBackOff   290        24h
jhub          hook-image-awaiter-ctphk               0/1     Error                   0          24h
jhub          hook-image-awaiter-fvgtk               0/1     Error                   0          24h
jhub          hook-image-awaiter-hcm8h               0/1     Error                   0          24h
jhub          hook-image-awaiter-k9jzx               0/1     Error                   0          24h
jhub          hook-image-awaiter-sz44f               0/1     Error                   0          24h
jhub          hook-image-awaiter-t57lk               0/1     Error                   0          24h
jhub          hook-image-puller-4pfkb                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-7jvmk                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-7vrkk                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-8f8d7                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-d48hg                0/1     Init:CrashLoopBackOff   286        24h
jhub          hook-image-puller-sdd5q                0/1     Init:CrashLoopBackOff   287        24h
kube-system   coredns-6955765f44-4tt86               1/1     Running                 0          2d1h
kube-system   coredns-6955765f44-rqnpm               1/1     Running                 0          2d1h
kube-system   etcd-k8s-master-1                      1/1     Running                 0          2d1h
kube-system   kube-apiserver-k8s-master-1            1/1     Running                 0          2d1h
kube-system   kube-controller-manager-k8s-master-1   1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-ckvm6              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-fk4nw              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-h8kxh              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-ndzmf              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-pvdtj              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-svw4v              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-xlds6              1/1     Running                 0          2d1h
kube-system   kube-proxy-87zsd                       1/1     Running                 0          2d1h
kube-system   kube-proxy-c4wtw                       1/1     Running                 0          2d1h
kube-system   kube-proxy-c66sl                       1/1     Running                 0          2d1h
kube-system   kube-proxy-gt25q                       1/1     Running                 0          2d1h
kube-system   kube-proxy-kkltk                       1/1     Running                 0          2d1h
kube-system   kube-proxy-rjv4z                       1/1     Running                 0          2d1h
kube-system   kube-proxy-v8rz7                       1/1     Running                 0          2d1h
kube-system   kube-scheduler-k8s-master-1            1/1     Running                 0          2d1h

I’m using Helm 3. I tried creating the namespace prior to running helm but got the same results.

My config is pretty basic:

proxy:
  secretToken: "<SECRET>"

I agree that it is probably an ARM vs AMD64 issue. It appears the Docker image for jupyterhub/jupyterhub is published only for AMD64.
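
One way to check (assuming a Docker client recent enough to support manifest inspection; on older versions it is an experimental command) is to list the platforms in the image manifest and look for an ARM entry:

    docker manifest inspect jupyterhub/k8s-singleuser-sample:0.8.2

If the output lists no arm/arm64 platform, the image won’t run on the Pis.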

By the way, I did a helm uninstall, but it didn’t get rid of the pods. I tried kubectl delete pod, but they keep getting recreated.
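
Presumably the pods are being recreated by their owning controllers, so deleting those instead of the pods should make them stay gone (controller names assumed from the pod names above):

    kubectl delete daemonset hook-image-puller --namespace jhub
    kubectl delete job hook-image-awaiter --namespace jhub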

Could you try kubectl describe pod $NAME for some of those pods e.g. hook-image-awaiter-58nkw, hook-image-puller-5htvp?

pi@k8s-master-1:~ $ kubectl describe pod hook-image-awaiter-h72s8 --namespace jhub
Name:         hook-image-awaiter-h72s8
Namespace:    jhub
Priority:     0
Node:         k8s-node-1/10.0.3.239
Start Time:   Mon, 13 Jan 2020 00:31:13 +0000
Labels:       app=jupyterhub
              component=image-puller
              controller-uid=43115d6d-8e22-4d61-a3a7-24b624f4dc95
              job-name=hook-image-awaiter
              release=jhub
Annotations:  <none>
Status:       Failed
IP:           10.244.1.8
IPs:
  IP:           10.244.1.8
Controlled By:  Job/hook-image-awaiter
Containers:
  hook-image-awaiter:
    Container ID:  docker://863930c6cbddb18c439289a47cb34784ab2bd7e8fa841a8c251ed583960e0101
    Image:         jupyterhub/k8s-image-awaiter:0.8.2
    Image ID:      docker-pullable://jupyterhub/k8s-image-awaiter@sha256:9103869ffc258ce12bcdcc3461a4c4d9896c6f46dbc1c28cf4442e8ae82e4d2a
    Port:          <none>
    Host Port:     <none>
    Command:
      /image-awaiter
      -ca-path=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -auth-token-path=/var/run/secrets/kubernetes.io/serviceaccount/token
      -api-server-address=https://$(KUBERNETES_SERVICE_HOST):$(KUBERNETES_SERVICE_PORT)
      -namespace=jhub
      -daemonset=hook-image-puller
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Jan 2020 00:31:21 +0000
      Finished:     Mon, 13 Jan 2020 00:31:21 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from hook-image-awaiter-token-8wbb7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  hook-image-awaiter-token-8wbb7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hook-image-awaiter-token-8wbb7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                 Message
  ----    ------     ----  ----                 -------
  Normal  Scheduled  116s  default-scheduler    Successfully assigned jhub/hook-image-awaiter-h72s8 to k8s-node-1
  Normal  Pulled     113s  kubelet, k8s-node-1  Container image "jupyterhub/k8s-image-awaiter:0.8.2" already present on machine
  Normal  Created    108s  kubelet, k8s-node-1  Created container hook-image-awaiter
  Normal  Started    108s  kubelet, k8s-node-1  Started container hook-image-awaiter
pi@k8s-master-1:~ $

And:

pi@k8s-master-1:~ $ kubectl describe pod hook-image-puller-2d4mq --namespace jhub
Name:         hook-image-puller-2d4mq
Namespace:    jhub
Priority:     0
Node:         k8s-node-3/10.0.3.237
Start Time:   Mon, 13 Jan 2020 00:28:59 +0000
Labels:       app=jupyterhub
              component=hook-image-puller
              controller-revision-hash=69979df87d
              pod-template-generation=1
              release=jhub
Annotations:  <none>
Status:       Pending
IP:           10.244.3.3
IPs:
  IP:           10.244.3.3
Controlled By:  DaemonSet/hook-image-puller
Init Containers:
  image-pull-singleuser:
    Container ID:  docker://1d7718d8a100f4a772500f02d71fa4ffc4513374390ba15a6de4cc21d5efc25f
    Image:         jupyterhub/k8s-singleuser-sample:0.8.2
    Image ID:      docker-pullable://jupyterhub/k8s-singleuser-sample@sha256:a9a5825cf52e6258e02846591fe6e0a945ac9ab22942624465e2eee2feefcb7d
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Jan 2020 00:31:58 +0000
      Finished:     Mon, 13 Jan 2020 00:31:58 +0000
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:         <none>
  image-pull-metadata-block:
    Container ID:
    Image:         jupyterhub/k8s-network-tools:0.8.2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  pause:
    Container ID:
    Image:          gcr.io/google_containers/pause:3.0
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:            <none>
QoS Class:          BestEffort
Node-Selectors:     <none>
Tolerations:        hub.jupyter.org/dedicated=user:NoSchedule
                    hub.jupyter.org_dedicated=user:NoSchedule
                    node.kubernetes.io/disk-pressure:NoSchedule
                    node.kubernetes.io/memory-pressure:NoSchedule
                    node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/pid-pressure:NoSchedule
                    node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                    From                 Message
  ----     ------     ----                   ----                 -------
  Normal   Scheduled  5m45s                  default-scheduler    Successfully assigned jhub/hook-image-puller-2d4mq to k8s-node-3
  Normal   Pulled     4m16s (x5 over 5m43s)  kubelet, k8s-node-3  Container image "jupyterhub/k8s-singleuser-sample:0.8.2" already present on machine
  Normal   Created    4m15s (x5 over 5m42s)  kubelet, k8s-node-3  Created container image-pull-singleuser
  Normal   Started    4m15s (x5 over 5m41s)  kubelet, k8s-node-3  Started container image-pull-singleuser
  Warning  BackOff    38s (x26 over 5m38s)   kubelet, k8s-node-3  Back-off restarting failed container
pi@k8s-master-1:~ $