If you manually installed Kubernetes, there's a good chance you're missing some standard components; typical examples are storage provisioners and load balancers. If that's the problem, there are ways to work around their absence in Z2JH.
Can you check your pods with kubectl describe ... and paste the output, along with your full configuration file (with secrets redacted)?
If this is a single-server deployment you might be better off using https://tljh.jupyter.org/ instead.
Hi manics, thanks for the fast and useful answer! I might switch to tljh, but ultimately we are targeting a Kubernetes cluster of at least 3 machines, so I want to give this another try.
Update: I made some progress. All pods are now running. However, the service proxy-public is still pending:
admin-nb@eo-dell-r7525a:~$ cat > storage_dynamic_fast.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
admin-nb@eo-dell-r7525a:~$ kubectl apply -f storage_dynamic_fast.yaml
storageclass.storage.k8s.io/fast created
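(In hindsight: kubernetes.io/gce-pd is Google Cloud's persistent-disk provisioner and cannot actually provision anything on bare metal, which explains the pending PVC further down. For anyone following along, a bare-metal-friendly StorageClass might look roughly like this — an untested sketch that assumes Rancher's local-path-provisioner is already deployed in the cluster:)

```yaml
# Sketch: assumes the rancher.io/local-path provisioner is installed
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```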
admin-nb@eo-dell-r7525a:~$ vi config_JupyterHub_v2.yaml
admin-nb@eo-dell-r7525a:~$ cat config_JupyterHub_v2.yaml
# This file can update the JupyterHub Helm chart's default configuration values.
#
# For reference see the configuration reference and default values, but make
# sure to refer to the Helm chart version of interest to you!
#
# Introduction to YAML: https://www.youtube.com/watch?v=cdLNKUoMc6c
# Chart config reference: https://zero-to-jupyterhub.readthedocs.io/en/stable/resources/reference.html
# Chart default values: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/e14d686fea782482b1f7d388118bf772a6ab5be7/jupyterhub/values.yaml
# Available chart versions: https://jupyterhub.github.io/helm-chart/
#
#
## inspired from https://discourse.jupyter.org/t/problem-using-kubernetes-for-jupyterhub-on-a-local-infrastructure/369/8
## This portion is missing from the tutorial for anyone trying to set up on bare metal.
## dynamic "fast" memory created before
hub:
  db:
    type: sqlite-memory
singleuser:
  storage:
    type: dynamic
    class: fast
##
admin-nb@eo-dell-r7525a:~$ helm upgrade --cleanup-on-fail --install jhub jupyterhub/jupyterhub --namespace jhub --create-namespace --version=0.11.1-n349.he14d686f --values config_JupyterHub_v2.yaml
Release "jhub" has been upgraded. Happy Helming!
NAME: jhub
LAST DEPLOYED: Tue Mar 30 15:34:04 2021
NAMESPACE: jhub
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
Thank you for installing JupyterHub!
Your release is named jhub and installed into the namespace jhub.
You can find if the hub and proxy is ready by doing:
kubectl --namespace=jhub get pod
and watching for both those pods to be in status 'Running'.
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
Note that this is still an alpha release! If you have questions, feel free to
1. Read the guide at https://z2jh.jupyter.org
2. Chat with us at https://gitter.im/jupyterhub/jupyterhub
3. File issues at https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues
admin-nb@eo-dell-r7525a:~$ kubectl get pods -n jhub
NAME                              READY   STATUS    RESTARTS   AGE
continuous-image-puller-htgnj     1/1     Running   0          17m
hub-55875db7f5-nmpdd              1/1     Running   0          5m20s
proxy-7989b9cb88-4l79s            1/1     Running   0          5m20s
user-scheduler-7f59fc6f47-49dl6   1/1     Running   0          5m20s
user-scheduler-7f59fc6f47-6xtct   1/1     Running   0          5m20s
admin-nb@eo-dell-r7525a:~$ kubectl get services -n jhub
NAME           TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
hub            ClusterIP      10.111.95.240    <none>        8081/TCP       5m26s
proxy-api      ClusterIP      10.97.145.202    <none>        8001/TCP       5m26s
proxy-public   LoadBalancer   10.104.148.191   <pending>     80:31200/TCP   5m26s
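(The pending EXTERNAL-IP is expected on bare metal: a Service of type LoadBalancer needs a cloud or MetalLB load-balancer implementation behind it. One workaround is to expose the proxy on a NodePort via the chart's proxy.service settings — a rough sketch, with the port number just an assumption:)

```yaml
# Sketch: switch proxy-public from LoadBalancer to NodePort,
# then reach the hub at http://<any-node-ip>:<nodePort>
proxy:
  service:
    type: NodePort
    nodePorts:
      http: 31200   # assumed value; pick any free NodePort
```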
Your server is starting up.
You will be redirected automatically when it's ready for you.
50% Complete
2021-03-30T16:33:41.831375Z [Warning] 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
Event log
Server requested
2021-03-30T16:33:41.823736Z [Warning] 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
2021-03-30T16:33:41.831375Z [Warning] 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
admin-nb@eo-dell-r7525a:~$ kubectl get pods -n jhub
NAME                              READY   STATUS    RESTARTS   AGE
continuous-image-puller-htgnj     1/1     Running   0          72m
hub-55875db7f5-nmpdd              1/1     Running   0          60m
jupyter-nils                      0/1     Pending   0          71s
proxy-7989b9cb88-4l79s            1/1     Running   0          60m
user-scheduler-7f59fc6f47-49dl6   1/1     Running   0          60m
user-scheduler-7f59fc6f47-6xtct   1/1     Running   0          60m
So, apparently I still have to dig deeper into the persistent storage subject.
How did you solve the PersistentVolume issue for your hub?
By default Z2JH will dynamically create a PersistentVolume for each user. See Dynamic Volume Provisioning | Kubernetes if you’re not familiar with dynamic provisioning of storage.
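Concretely, the singleuser.storage settings in your config make the chart create one PersistentVolumeClaim per user against that StorageClass. The claim it generates looks roughly like this (illustrative sketch; the name pattern and size are assumptions based on the chart's defaults):

```yaml
# Illustrative PVC similar to what Z2JH requests per user
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-nils        # assumed per-user name pattern
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast  # the class from your config
  resources:
    requests:
      storage: 10Gi       # assumed default capacity
```

If no provisioner actually backs the StorageClass, this claim stays Pending and the user pod cannot be scheduled — which matches the warning in your event log.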
Since it's working, that's great. In case it's helpful to you in future, an alternative is to install an ingress controller (see the Advanced Topics section of the Zero to JupyterHub with Kubernetes documentation).
This works when the only public resources are web services, since the ingress can reverse-proxy multiple web services.
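For reference, enabling the chart's built-in Ingress support looks roughly like this (the hostname is a placeholder, and it assumes an ingress controller such as ingress-nginx is already installed in the cluster):

```yaml
# Sketch: Z2JH values enabling an Ingress instead of a LoadBalancer
ingress:
  enabled: true
  hosts:
    - jupyter.example.org   # placeholder hostname
```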
Hi, I decided to go for NFS storage. Unfortunately, one recipe was outdated (it used a beta API). The second one seemed to go through, but didn't actually allocate a volume in my attempt (PVC pending): Provision Kubernetes NFS clients on a Raspberry Pi homelab | Opensource.com
I wondered whether some remains of the first rbac.yaml file were causing problems, but didn't have time to look at this in detail.
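(For anyone hitting the same wall: the maintained project in this space is nfs-subdir-external-provisioner, whose StorageClass ends up looking roughly like the sketch below. The provisioner string is an assumption from that project's defaults and must match however the provisioner was deployed:)

```yaml
# Sketch: StorageClass for the nfs-subdir-external-provisioner project
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # assumed; must match the deployment
parameters:
  archiveOnDelete: "false"
```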
I also asked my staff which provisioner our production Kubernetes cluster is using; I thought it was based on generic NFS functionality and that I might just copy the setup. However, it is using the Trident provisioner, which seems specific to the ONTAP OS of our NetApp all-flash system, and I would rather not use that for this prototype.
So I will probably follow your advice and disable persistent storage. Ultimately, I want to use a cluster file system, most probably Ceph.
Thanks for the help!
Simon (manics), thanks again! I have now disabled persistent storage and could actually start using the JupyterHub.
Initially I was disappointed that “import numpy as np” threw an error. However, after switching from the default notebook image to the datascience notebook image, the installation is starting to look useful (as a demonstration platform).
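For completeness, the relevant chart values I ended up with look roughly like this (keys taken from the Z2JH configuration reference; the image tag is a placeholder and should be pinned in practice):

```yaml
# Sketch: disable per-user persistent storage and use the datascience image
singleuser:
  storage:
    type: none
  image:
    name: jupyter/datascience-notebook
    tag: latest   # placeholder; pin a specific tag in practice
```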