Hello,
I followed all the instructions for setting up BinderHub.
My Helm client and server versions (from helm version):
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
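For context, the install itself went roughly like this (a sketch from memory, not the exact command; <chart-version> is a placeholder, but the release name and namespace match the pod output below):

# hypothetical reconstruction of my install command (Helm 2 syntax)
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart
helm repo update
helm install jupyterhub/binderhub --version=<chart-version> --name=core2test --namespace=core2test -f secret.yaml -f config.yaml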
After deploying, the hub pod never gets scheduled and stays stuck in Pending. Here is the output of kubectl describe:
kubectl describe pods hub-5f7df5ff78-7w5nq --namespace core2test
Name:               hub-5f7df5ff78-7w5nq
Namespace:          core2test
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app=jupyterhub
                    component=hub
                    hub.jupyter.org/network-access-proxy-api=true
                    hub.jupyter.org/network-access-proxy-http=true
                    hub.jupyter.org/network-access-singleuser=true
                    pod-template-hash=5f7df5ff78
                    release=core2test
Annotations:        checksum/config-map: b913a781c711d09d80081b7586eaca6fd730af6f123af91e1e79ed57d0dec4f7
                    checksum/secret: 5db693b463b61dc773e16b8540f1b872423798f99e497dbacef42f6543989cc5
Status:             Pending
IP:
Controlled By:      ReplicaSet/hub-5f7df5ff78
Containers:
  hub:
    Image:      jupyterhub/k8s-hub:0.9-1d2e51b
    Port:       8081/TCP
    Host Port:  0/TCP
    Command:
      jupyterhub
      --config
      /srv/jupyterhub_config.py
      --upgrade-db
    Requests:
      cpu:     200m
      memory:  512Mi
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       core2test
      POD_NAMESPACE:           core2test (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
    Mounts:
      /etc/jupyterhub/config/ from config (rw)
      /etc/jupyterhub/secret/ from secret (rw)
      /srv/jupyterhub from hub-db-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from hub-token-6rxvw (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub-config
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-secret
    Optional:    false
  hub-db-dir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  hub-token-6rxvw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-token-6rxvw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  47s (x16 over 17m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
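From searching around, that warning seems to mean the hub-db-dir PersistentVolumeClaim never binds to a PersistentVolume, typically because the cluster has no (default) StorageClass that can dynamically provision one. These are the checks I would run next (sketch; output omitted):

# is the hub's PVC stuck in Pending, and what does its event log say?
kubectl get pvc --namespace core2test
kubectl describe pvc hub-db-dir --namespace core2test
# does the cluster have a default StorageClass that can provision volumes?
kubectl get storageclass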
Has anyone experienced the same problem, and how did you fix it?
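If it really is a missing provisioner, would a workaround along these lines be reasonable? This is just a guess based on the hub.db.type option of the underlying JupyterHub chart, not something I have verified:

# hypothetical workaround sketch: run the hub database in memory so no PVC is needed
cat >> config.yaml <<'EOF'
jupyterhub:
  hub:
    db:
      type: sqlite-memory
EOF
helm upgrade core2test jupyterhub/binderhub -f secret.yaml -f config.yaml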