Speeding up "time to developing"

Frustrated with how long minikube start takes to run just to do some development on BinderHub (and because it has been raining all day), I investigated k3s and k3d. k3s is a kubernetes distribution with the tag line “k3s - 5 less than k8s”: small, fast, simple. There are several things it can’t do, but in exchange it is meant to be super lightweight.

k3d is a tool that lets you run k3s inside a docker container. Hello kubernetes inception: a kubernetes cluster that runs in a docker container and then runs docker containers inside that cluster. I tried this because I was never quite sure what would happen if I followed the k3s instructions (would it break my local setup?). Having it all in one docker container made it feel low risk to try.

The goal of this exercise was to see if I could get BinderHub up and running by mostly following our contributing guide but without using minikube.


On OSX it was quick to install k3d with brew install k3d. After that I created a new cluster with k3d create --publish 8023:30123. This exposes port 30123 from inside the cluster (a NodePort) as port 8023 on your localhost.
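Since the whole cluster is just a docker container, a quick way to see what k3d actually did (a sketch, assuming docker is available on your PATH) is to list the running containers:

# the k3s server "node" shows up as a regular container on the host,
# with localhost port 8023 forwarded to 30123 inside it
docker ps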

Follow the instructions this command prints and run export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')". This will configure your kubectl to talk to the k3s cluster you just created.
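To double-check that kubectl really points at the new cluster (assuming kubectl is already installed), list the nodes:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s-default')"
# the single k3s node should report STATUS Ready after a few seconds
kubectl get nodes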

Next we have to create a service account for tiller and give it cluster-admin rights:

kubectl --namespace kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller

Now install tiller in the cluster:

helm init --service-account tiller --wait
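To confirm that tiller came up (this assumes Helm 2, which is what helm init implies), check that both the client and the server report a version:

# with Helm 2, "helm version" also queries tiller inside the cluster,
# so it should print both a Client and a Server version
helm version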

Install the JupyterHub:

./testing/minikube/install-hub

Once that completes, you should have a JupyterHub up and running that BinderHub can talk to. Remember the --publish 8023:30123 part of the k3d command line earlier? The JupyterHub proxy is configured to use a node port (30123), which we expose as port 8023 on localhost. To check that the service is up, run:

kubectl get services --all-namespaces

You should see a few rows, one of which looks like this:

NAMESPACE     NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
binder-test   proxy-public    NodePort       10.43.19.47    <none>        80:30123/TCP,443:31774/TCP   31m

Three things to look for: the name of the service is proxy-public, it is of type NodePort and its port 80 is mapped to 30123.

Now from your laptop you should be able to run curl http://localhost:8023/hub/api/ and get a response like {"version": "1.0.0"}. This means you can talk to the JupyterHub inside the k3s cluster. (That is a JupyterHub running in a docker container in a kubernetes cluster that is running in a docker container. Must we go deeper? :wink: )
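For copy-paste convenience, the check and the response described above:

curl http://localhost:8023/hub/api/
# expected response (the exact version number may differ):
# {"version": "1.0.0"}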

To start your BinderHub, we need to edit testing/minikube/binderhub_config.py to change the IP and port on which the JupyterHub can be reached. Remove the lines related to getting the IP from minikube, then edit the hub_url line to read c.BinderHub.hub_url = 'http://localhost:8023'.
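As a sketch, the relevant part of testing/minikube/binderhub_config.py ends up looking like this after the edit (only the hub_url line is shown; the rest of the file stays as it is):

# point BinderHub at the JupyterHub exposed on the k3d node port instead of the minikube IP
c.BinderHub.hub_url = 'http://localhost:8023'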

Start the BinderHub with:

python3 -m binderhub -f testing/minikube/binderhub_config.py

Open http://localhost:8585 in your browser and enjoy your shiny new BinderHub. Unfortunately building new images doesn’t seem to work, so more work is needed.

The error message is:

MountVolume.SetUp failed for volume "docker-socket" : hostPath type check failed: /var/run/docker.sock is not a socket file

which I think is because we can’t mount the docker socket from the host: the “host” here is the k3s node, which is itself a container running inside docker, so there is no real docker socket at /var/run/docker.sock for the pod to mount.
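If you want to see where that message comes from, it is reported as a warning event on the failed pod; a generic way to spot it (nothing k3s-specific) is to list the recent events:

# the MountVolume.SetUp failure shows up as a FailedMount warning on the build pod
kubectl get events --all-namespaces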


I’ve played with k3s too; I’m really impressed with its quick setup time!

For your last point (mounting the docker socket), this issue is relevant: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/1225
I’ve been playing with Podman recently (which uses buildah under the hood) and trying to get repo2docker working with it.


I read that you can switch k3s to use docker instead of containerd, so I tried k3d create --publish 8023:30123 -v /var/run/docker.sock:/var/run/docker.sock -x "--docker", but now it seems that not even the core pods start. This needs more time to investigate/understand.
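For reference, that attempt spelled out, plus a standard way to watch whether the core pods come up:

k3d create --publish 8023:30123 -v /var/run/docker.sock:/var/run/docker.sock -x "--docker"
# with a healthy cluster the core (kube-system) pods reach Running
kubectl get pods --all-namespaces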

Unrelated, yet related: do you know if you can use k3d to simulate a “multi node” kubernetes cluster? That would be fantastic for testing/developing some of the scheduling cleverness in BinderHub.


Kind supports multi-node clusters, but making them autoscale may be another matter, and so is labeling them nicely. It’s a tough one.

I’m working hard on CI stuff for z2jh atm. I recently reached test success with new Python-based scripts instead of the previous bash scripts I relied on.

Curious to learn about the viability of k3s!


Brief update on my stance: minikube rocks; kind is too immature and causes a bit too much hassle for local development, while it may be superior for more advanced tests on k8s with its support for multi-node clusters etc.
