Z2JH on a Raspberry Pi K8s Cluster

I’ve recently set up a 7-node K8s cluster on some RPi 3+s and was wondering what it would take to get Z2JH ported.

I’m new to K8s and JH, but willing to put the work in if someone can point me in the right direction.

Thanks!


Kubernetes is an abstraction layer, so Z2JH should just work with the correct configuration. The easiest way to get started is to switch off all the extra features, like persistent storage. If you run into problems, post your full configuration and details of your setup here and someone might be able to help.
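
For example, a minimal config.yaml sketch with the extras switched off might look like this (keys assume the 0.8.x chart; the secretToken value is a placeholder):

    # Minimal sketch for a first install; keys assume the Z2JH 0.8.x chart.
    proxy:
      secretToken: "<output of: openssl rand -hex 32>"
    singleuser:
      storage:
        type: none            # no per-user PersistentVolumeClaims
    hub:
      db:
        type: sqlite-memory   # in-memory hub database, so no hub PVC either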

I tried using helm to install, but it failed.

First:

    pi@k8s-master-1:~ $ helm upgrade --install jhub jupyterhub/jupyterhub  \
    --namespace jhub --version=0.8.2 --values config.yaml 
    Release "jhub" does not exist. Installing it now.
    Error: create: failed to create: namespaces "jhub" not found.

Second try (left --namespace off):

pi@k8s-master-1:~ $ helm upgrade --install jhub jupyterhub/jupyterhub  --version=0.8.2 --values config.yaml
Release "jhub" does not exist. Installing it now.
Error: failed pre-install: timed out waiting for the condition
pi@k8s-master-1:~ $

Tried extending the timeout; I got the following error:

pi@k8s-master-1:~ $ helm upgrade --install jhub jupyterhub/jupyterhub  --version=0.8.2 --values config.yaml --timeout=30m
Release "jhub" does not exist. Installing it now.
Error: failed pre-install: job failed: BackoffLimitExceeded
pi@k8s-master-1:~ $

This sounds like a very cool project!!

I would expect Helm to just create the namespace for you instead of giving an error. Could this be because the Kubernetes “flavour” you use is somehow slimmed down/missing a feature?

What does kubectl get pods show you while the install command is running? BackoffLimitExceeded sounds like it is trying to run a particular pod (or something else) which keeps failing.

One thing that I don’t know but would expect to cause trouble: RPis are ARM-based (I think), and I have no idea how many of our Docker images would just work on an architecture that isn’t x86.

Which version of Helm are you using? Version 3 doesn’t create the namespace for you and isn’t officially supported by Z2JH (though I’m using it and it mostly works).
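
For reference, a sketch of the Helm 3 workaround (newer Helm 3 releases also accept a --create-namespace flag on install):

    # Create the namespace yourself, then point the install at it.
    kubectl create namespace jhub
    helm upgrade --install jhub jupyterhub/jupyterhub \
      --namespace jhub --version=0.8.2 --values config.yaml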

Can you also show us your config?

Here is the output of kubectl get pods --all-namespaces:

NAMESPACE     NAME                                   READY   STATUS                  RESTARTS   AGE
default       hook-image-awaiter-58nkw               0/1     Error                   0          24h
default       hook-image-awaiter-5hqxg               0/1     Error                   0          24h
default       hook-image-awaiter-775x8               0/1     Error                   0          24h
default       hook-image-awaiter-cq2sc               0/1     Error                   0          24h
default       hook-image-awaiter-ctwsb               0/1     Error                   0          24h
default       hook-image-awaiter-czkcc               0/1     Error                   0          24h
default       hook-image-awaiter-lbwgs               0/1     Error                   0          24h
default       hook-image-puller-5htvp                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-98nsd                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-gjjrp                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-hgqcf                0/1     Init:CrashLoopBackOff   289        24h
default       hook-image-puller-lncll                0/1     Init:CrashLoopBackOff   290        24h
default       hook-image-puller-tgj56                0/1     Init:CrashLoopBackOff   290        24h
jhub          hook-image-awaiter-ctphk               0/1     Error                   0          24h
jhub          hook-image-awaiter-fvgtk               0/1     Error                   0          24h
jhub          hook-image-awaiter-hcm8h               0/1     Error                   0          24h
jhub          hook-image-awaiter-k9jzx               0/1     Error                   0          24h
jhub          hook-image-awaiter-sz44f               0/1     Error                   0          24h
jhub          hook-image-awaiter-t57lk               0/1     Error                   0          24h
jhub          hook-image-puller-4pfkb                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-7jvmk                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-7vrkk                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-8f8d7                0/1     Init:CrashLoopBackOff   287        24h
jhub          hook-image-puller-d48hg                0/1     Init:CrashLoopBackOff   286        24h
jhub          hook-image-puller-sdd5q                0/1     Init:CrashLoopBackOff   287        24h
kube-system   coredns-6955765f44-4tt86               1/1     Running                 0          2d1h
kube-system   coredns-6955765f44-rqnpm               1/1     Running                 0          2d1h
kube-system   etcd-k8s-master-1                      1/1     Running                 0          2d1h
kube-system   kube-apiserver-k8s-master-1            1/1     Running                 0          2d1h
kube-system   kube-controller-manager-k8s-master-1   1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-ckvm6              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-fk4nw              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-h8kxh              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-ndzmf              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-pvdtj              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-svw4v              1/1     Running                 0          2d1h
kube-system   kube-flannel-ds-arm-xlds6              1/1     Running                 0          2d1h
kube-system   kube-proxy-87zsd                       1/1     Running                 0          2d1h
kube-system   kube-proxy-c4wtw                       1/1     Running                 0          2d1h
kube-system   kube-proxy-c66sl                       1/1     Running                 0          2d1h
kube-system   kube-proxy-gt25q                       1/1     Running                 0          2d1h
kube-system   kube-proxy-kkltk                       1/1     Running                 0          2d1h
kube-system   kube-proxy-rjv4z                       1/1     Running                 0          2d1h
kube-system   kube-proxy-v8rz7                       1/1     Running                 0          2d1h
kube-system   kube-scheduler-k8s-master-1            1/1     Running                 0          2d1h

I’m using Helm 3. I tried creating the namespace prior to running helm but got the same results.

My config is pretty basic:

proxy:
  secretToken: "<SECRET>"

I agree that it is probably an ARM vs AMD64 issue. It appears the Docker image for jupyterhub/jupyterhub is only built for AMD64.
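
One way to confirm that (a sketch; docker manifest inspect may need the experimental CLI enabled on older Docker releases, and the pod name is a placeholder):

    # List the platforms published for an image on Docker Hub.
    docker manifest inspect jupyterhub/k8s-singleuser-sample:0.8.2 | grep architecture
    # On an ARM node, the failing init container's log typically shows
    # "exec format error" when an amd64 binary is run:
    kubectl logs <hook-image-puller-pod> -c image-pull-singleuser --namespace jhub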

By the way, I did a “helm uninstall”, but it didn’t get rid of the pods. I tried a kubectl delete pod, but they keep getting recreated.
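
For what it’s worth, the pods keep coming back because they are owned by controllers, so deleting the controllers is what removes the pods for good. A sketch, using the resource names created by the chart’s pre-install hook:

    # Remove the hook's controllers; their pods are garbage-collected with them.
    kubectl delete daemonset hook-image-puller --namespace jhub
    kubectl delete job hook-image-awaiter --namespace jhub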

Could you try kubectl describe pod $NAME for some of those pods e.g. hook-image-awaiter-58nkw, hook-image-puller-5htvp?

pi@k8s-master-1:~ $ kubectl describe pod hook-image-awaiter-h72s8 --namespace jhub
Name:         hook-image-awaiter-h72s8
Namespace:    jhub
Priority:     0
Node:         k8s-node-1/10.0.3.239
Start Time:   Mon, 13 Jan 2020 00:31:13 +0000
Labels:       app=jupyterhub
              component=image-puller
              controller-uid=43115d6d-8e22-4d61-a3a7-24b624f4dc95
              job-name=hook-image-awaiter
              release=jhub
Annotations:  <none>
Status:       Failed
IP:           10.244.1.8
IPs:
  IP:           10.244.1.8
Controlled By:  Job/hook-image-awaiter
Containers:
  hook-image-awaiter:
    Container ID:  docker://863930c6cbddb18c439289a47cb34784ab2bd7e8fa841a8c251ed583960e0101
    Image:         jupyterhub/k8s-image-awaiter:0.8.2
    Image ID:      docker-pullable://jupyterhub/k8s-image-awaiter@sha256:9103869ffc258ce12bcdcc3461a4c4d9896c6f46dbc1c28cf4442e8ae82e4d2a
    Port:          <none>
    Host Port:     <none>
    Command:
      /image-awaiter
      -ca-path=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      -auth-token-path=/var/run/secrets/kubernetes.io/serviceaccount/token
      -api-server-address=https://$(KUBERNETES_SERVICE_HOST):$(KUBERNETES_SERVICE_PORT)
      -namespace=jhub
      -daemonset=hook-image-puller
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Jan 2020 00:31:21 +0000
      Finished:     Mon, 13 Jan 2020 00:31:21 +0000
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from hook-image-awaiter-token-8wbb7 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  hook-image-awaiter-token-8wbb7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hook-image-awaiter-token-8wbb7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                 Message
  ----    ------     ----  ----                 -------
  Normal  Scheduled  116s  default-scheduler    Successfully assigned jhub/hook-image-awaiter-h72s8 to k8s-node-1
  Normal  Pulled     113s  kubelet, k8s-node-1  Container image "jupyterhub/k8s-image-awaiter:0.8.2" already present on machine
  Normal  Created    108s  kubelet, k8s-node-1  Created container hook-image-awaiter
  Normal  Started    108s  kubelet, k8s-node-1  Started container hook-image-awaiter
pi@k8s-master-1:~ $

And:

pi@k8s-master-1:~ $ kubectl describe pod hook-image-puller-2d4mq --namespace jhub
Name:         hook-image-puller-2d4mq
Namespace:    jhub
Priority:     0
Node:         k8s-node-3/10.0.3.237
Start Time:   Mon, 13 Jan 2020 00:28:59 +0000
Labels:       app=jupyterhub
              component=hook-image-puller
              controller-revision-hash=69979df87d
              pod-template-generation=1
              release=jhub
Annotations:  <none>
Status:       Pending
IP:           10.244.3.3
IPs:
  IP:           10.244.3.3
Controlled By:  DaemonSet/hook-image-puller
Init Containers:
  image-pull-singleuser:
    Container ID:  docker://1d7718d8a100f4a772500f02d71fa4ffc4513374390ba15a6de4cc21d5efc25f
    Image:         jupyterhub/k8s-singleuser-sample:0.8.2
    Image ID:      docker-pullable://jupyterhub/k8s-singleuser-sample@sha256:a9a5825cf52e6258e02846591fe6e0a945ac9ab22942624465e2eee2feefcb7d
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 13 Jan 2020 00:31:58 +0000
      Finished:     Mon, 13 Jan 2020 00:31:58 +0000
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:         <none>
  image-pull-metadata-block:
    Container ID:
    Image:         jupyterhub/k8s-network-tools:0.8.2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c
      echo "Pulling complete"
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  pause:
    Container ID:
    Image:          gcr.io/google_containers/pause:3.0
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:            <none>
QoS Class:          BestEffort
Node-Selectors:     <none>
Tolerations:        hub.jupyter.org/dedicated=user:NoSchedule
                    hub.jupyter.org_dedicated=user:NoSchedule
                    node.kubernetes.io/disk-pressure:NoSchedule
                    node.kubernetes.io/memory-pressure:NoSchedule
                    node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/pid-pressure:NoSchedule
                    node.kubernetes.io/unreachable:NoExecute
                    node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                    From                 Message
  ----     ------     ----                   ----                 -------
  Normal   Scheduled  5m45s                  default-scheduler    Successfully assigned jhub/hook-image-puller-2d4mq to k8s-node-3
  Normal   Pulled     4m16s (x5 over 5m43s)  kubelet, k8s-node-3  Container image "jupyterhub/k8s-singleuser-sample:0.8.2" already present on machine
  Normal   Created    4m15s (x5 over 5m42s)  kubelet, k8s-node-3  Created container image-pull-singleuser
  Normal   Started    4m15s (x5 over 5m41s)  kubelet, k8s-node-3  Started container image-pull-singleuser
  Warning  BackOff    38s (x26 over 5m38s)   kubelet, k8s-node-3  Back-off restarting failed container
pi@k8s-master-1:~ $

Hello everyone,
at university I’m working on a project similar to njohnsn’s. Unfortunately, I can’t get JupyterHub installed either.
The hook-image-puller pods stay in the status “Init:CrashLoopBackOff” right after the installation starts.
My cluster consists of 4 Raspberry Pi 4Bs (8GB RAM), 2 of them master nodes. As the operating system I use Ubuntu 20.04.1 LTS.
I deployed my cluster with Raspbernetes: https://github.com/raspbernetes/k8s-cluster-installation

Does anyone have any ideas why we are seeing this issue?
I’m a newcomer to the Kubernetes universe, but I suspect that something basic is missing.

I would be very grateful for any help!

Hello,

I was hoping to set up a JH demo environment on my home RPi K3s cluster. I ran into the same issue that people on this thread seem to be having. Then I had a face-palm moment.

Unless I am mistaken, the jupyterhub repo doesn’t contain ARM-arch images. Take a look at Docker Hub: all I see is amd64. I may be missing something, but I don’t think this will work on Pis unless there is a different repo out there with ARM images.


You’re correct, only amd64 Docker images are currently built. You can see their definitions in the Z2JH repository’s images/ directory, for example, but note that chartpress is used to build them (with parameters) along with the corresponding Helm chart.

In principle chartpress could be modified to build images for multiple architectures, but presumably this would require full-stack support for those architectures. Have you looked at whether it’s feasible to rebuild any of the above images for ARM?
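
As a rough sketch, a manual rebuild with Docker buildx could look like this (assuming QEMU binfmt emulation is set up on the build host; the image name and tag are illustrative):

    # Cross-build the hub image for arm64 from a zero-to-jupyterhub-k8s checkout.
    docker buildx create --use
    docker buildx build --platform linux/arm64 \
      --tag example/k8s-hub:arm64 --push images/hub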


I tried rebuilding the arm64 Docker images required for Z2JH and succeeded.
This works on RPi4 / Ubuntu 20.04, arm64 architecture.
If you want to test my build, use my repository as follows:

$ helm repo add my-jupyterhub https://sakuraiyuta.github.io/helm-chart/
$ helm repo update
$ helm upgrade \
      --cleanup-on-fail \
      --install jhub my-jupyterhub/jupyterhub \
      --namespace jupyterhub \
      --create-namespace \
      --version 0.11.1-n280.h5fc417ef \
      --values config.yaml
# Create config.yaml and configure your kubectl first, as you would when installing on amd64.

If you want to check the source, see below:

Helm Repository: GitHub - sakuraiyuta/helm-chart at gh-pages
Z2JH source: GitHub - sakuraiyuta/zero-to-jupyterhub-k8s at fix/arm64
Jupyter base-notebook source: GitHub - sakuraiyuta/docker-stacks at fix/arm64
(Sorry, the forum rejected my attempt to post multiple links.)
(edit from @betatim: used my admin powers to turn those links into links)

Issues and PRs are welcome.

Finally, this is experimental and a personal repository, so it may change without notice.


The current Z2JH Docker images are built using chartpress running in a GitHub workflow.

@sakuraiyuta Do you know if there’s a way to build arm64 Docker images in a GitHub workflow?


@manics Thank you for the reply.

Sorry, I’m not familiar with building Docker images or GitHub workflows.
I’ll check your snippets and research a better way. Thank you.

Z2JH on RPi4 (4 nodes) seems fast, except for creating the user pod. Not bad, so I may continue operating this environment.

@manics Following your advice, I added GitHub workflow/action settings, and I can now build Z2JH for aarch64 (arm64) and release it to my repository via GitHub Actions whenever I git-push.
(I haven’t fixed test-chart.yaml yet, so the GitHub Action reports a test-chart error.)

The workflow YAML for building the arm64 images is here:

(Note: this file contains some settings for publishing to my own repository.)

There are many technical issues and some unknown behavior remaining (caused by my own ignorance), but it deploys and works on my RPi cluster.

For example, the official GitHub Actions support building multi-architecture Docker images using buildx, but jupyter/docker-stacks and Z2JH use other build systems (make, and chartpress).
For that reason, I had to take a different approach: building the Docker images inside an arm64 Docker image on GitHub Actions.
Seems ridiculous? Any better solutions are welcome.
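
For comparison, a hypothetical sketch of the buildx route in a GitHub Actions workflow (action versions, image name, and build path are illustrative, not taken from the repositories above):

    # Hypothetical job: cross-build an arm64 image with QEMU + buildx.
    jobs:
      build-arm64:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: docker/setup-qemu-action@v1    # register binfmt emulators
          - uses: docker/setup-buildx-action@v1  # enable docker buildx
          - name: Build and push
            run: |
              docker buildx build --platform linux/arm64 \
                --tag example/k8s-hub:arm64 --push images/hub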

I hope this thread acts as a trigger for an official Z2JH arm64 release.


UPDATE: I squashed the git commits, so the previous comments are outdated.

Now you can try Z2JH on Ubuntu 20.04 Server aarch64 (arm64) using the tag “0.11.1-aarch64”.

Install:

$ helm repo add my-jupyterhub https://sakuraiyuta.github.io/helm-chart/
$ helm repo update
$ helm upgrade \
      --cleanup-on-fail \
      --install jhub my-jupyterhub/jupyterhub \
      --namespace jupyterhub \
      --create-namespace \
      --version 0.11.1-aarch64 \
      --values config.yaml

Z2JH source:

Optionally, you can choose among several Jupyter notebook Docker images built for aarch64.
Now available:

  • base-notebook
  • minimal-notebook
  • r-notebook
  • scipy-notebook
  • datascience-notebook
  • pyspark-notebook
  • all-spark-notebook
    (sorry, some images are not yet thoroughly tested)

Configuration example:

proxy:
  secretToken: "# create token using: `openssl rand -hex 32`"
hub:
  config:
    GoogleOAuthenticator:
      client_id: #secret
      client_secret: #secret
      oauth_callback_url: https://your.domain/hub/oauth_callback
      hosted_domain:
        - example.com
      login_service: Your Organization
    JupyterHub:
      authenticator_class: google
singleuser:
  image:
    name: sakuraiyuta/base-notebook
    tag: latest
  profileList:
    - display_name: "default(base-notebook)"
      description: "desc"
      default: true
    - display_name: "minimal-notebook"
      description: "desc"
      kubespawner_override:
        image: sakuraiyuta/minimal-notebook:latest
    - display_name: "scipy-notebook"
      description: "desc"
      kubespawner_override:
        image: sakuraiyuta/scipy-notebook:latest
    - display_name: "r-notebook"
      description: "desc"
      kubespawner_override:
        image: sakuraiyuta/r-notebook:latest
    - display_name: "datascience-notebook"
      description: "desc"
      kubespawner_override:
        image: sakuraiyuta/datascience-notebook:latest
    - display_name: "all-spark-notebook"
      description: "desc"
      kubespawner_override:
        image: sakuraiyuta/all-spark-notebook:latest

Jupyter Notebook docker-stacks source:

BTW: I wish the Jupyter and Z2JH maintainers would officially support aarch64.
I’ll create a PR if the maintainer team is interested.


Thanks for all this work. I’ve opened a Z2JH issue to discuss how to go about supporting ARM64 without adding too much of a maintenance burden. Feel free to add any comments there.


That’s great news!
Thank you very much to the maintainers for starting to consider arm64/aarch64 support.
The discussion seems to be heading in a good direction.
I’ll comment on the GitHub issue if you need my findings.


Thanks for the effort you have put into this everyone! @manics made a push to make z2jh support arm64 based images that are now published, so at this point it would be great to have feedback if it seem to work properly.

Z2JH version 0.11.1-n393.h2aa513d9 and later is supposed to reference arm64-compatible images, except for the user image, which still isn’t arm64-compatible.

So, if you have time to test the official Z2JH images, you still need to override singleuser.image.name and singleuser.image.tag with your own arm64-compatible user image. This can be done, for example, using --set singleuser.image.name=sakuraiyuta/base-notebook,singleuser.image.tag=latest in the helm command.
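
Put together, the install command might look like this (a sketch combining the install example above with the override):

    helm upgrade \
        --cleanup-on-fail \
        --install jhub jupyterhub/jupyterhub \
        --namespace jupyterhub \
        --create-namespace \
        --version 0.11.1-n393.h2aa513d9 \
        --values config.yaml \
        --set singleuser.image.name=sakuraiyuta/base-notebook,singleuser.image.tag=latest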

If you can verify whether the official images work or don’t work, that would be great!
