Hello, I'm trying to deploy BinderHub to an AWS EKS cluster following this guide.
The binder and hub pods are running and I can visit the application in the browser. The issue is that the jupyterhub-image-cleaner pods are stuck in ContainerCreating status because Docker is not installed.
The EKS AMI uses the containerd runtime. When I try to create and use a custom AMI with Docker installed (based on the EKS-optimized AMI), the aws-node pods keep restarting, possibly due to a conflict between the containerd and Docker runtimes.
My question is: is there a solution or a config option for making jupyterhub-image-cleaner use containerd instead of Docker?
Here’s the result when I describe the pod.
```
❯ kubectl describe pods/jupyterhub-image-cleaner-7jff4 -n jupyterhub
Name:             jupyterhub-image-cleaner-7jff4
Namespace:        jupyterhub
Priority:         0
Service Account:  jupyterhub-image-cleaner
Node:             ip-10-4-1-87.ap-southeast-2.compute.internal/10.4.1.87
Start Time:       Wed, 12 Mar 2025 07:16:08 +0800
Labels:           app=binder
                  component=image-cleaner
                  controller-revision-hash=698fbdd759
                  heritage=Helm
                  name=jupyterhub-image-cleaner
                  pod-template-generation=1
                  release=jupyterhub
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    DaemonSet/jupyterhub-image-cleaner
Containers:
  image-cleaner-host:
    Container ID:
    Image:          quay.io/jupyterhub/docker-image-cleaner:1.0.0-beta.3
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      DOCKER_IMAGE_CLEANER_NODE_NAME:       (v1:spec.nodeName)
      DOCKER_IMAGE_CLEANER_PATH_TO_CHECK:   /var/lib/host
      DOCKER_IMAGE_CLEANER_DELAY_SECONDS:   5
      DOCKER_IMAGE_CLEANER_THRESHOLD_TYPE:  relative
      DOCKER_IMAGE_CLEANER_THRESHOLD_HIGH:  80
      DOCKER_IMAGE_CLEANER_THRESHOLD_LOW:   60
    Mounts:
      /var/lib/host from storage-host (rw)
      /var/run/docker.sock from socket-host (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cxk4m (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  storage-host:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/docker
    HostPathType:
  socket-host:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/docker.sock
    HostPathType:  Socket
  kube-api-access-cxk4m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 hub.jupyter.org/dedicated=user:NoSchedule
                             hub.jupyter.org_dedicated=user:NoSchedule
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    2m11s               default-scheduler  Successfully assigned jupyterhub/jupyterhub-image-cleaner-7jff4 to ip-10-4-1-87.ap-southeast-2.compute.internal
  Warning  FailedMount  3s (x9 over 2m11s)  kubelet            MountVolume.SetUp failed for volume "socket-host" : hostPath type check failed: /var/run/docker.sock is not a socket file
```
Hey @rodentskie,
You could try setting imageBuilderType: dind (for docker-in-docker) at the root of your BinderHub config file. This did the trick for me on EKS as well. It looks like it tells both the image cleaners and the build pods to use dind instead of looking for the Docker socket on the host.
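For reference, a minimal sketch of what that looks like in the BinderHub Helm config file. Only the imageBuilderType line is the fix from this thread; the config.BinderHub block and the hub URL are illustrative placeholders for a typical config, so adapt them to your own values:

```yaml
# config.yaml passed to helm, e.g. `helm upgrade ... -f config.yaml`
# Key line: switch image building (and the image cleaners) to
# docker-in-docker so nothing mounts /var/run/docker.sock from the host.
imageBuilderType: dind

# Placeholder section — your existing BinderHub settings stay as they are.
config:
  BinderHub:
    hub_url: https://hub.example.org  # hypothetical URL, use your own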
Best,