I have an on-prem, bare-metal, 4-node cluster and I'm using MetalLB as the load balancer.
Here is the metallb.yaml file:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
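In case it helps, this is on top of a standard MetalLB install; applying the pool and checking that the controller and speaker pods are up looks something like:

kubectl apply -f metallb.yaml
kubectl --namespace=metallb-system get pods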
Here is the config.yaml file I used with the jhub Helm release:
proxy:
  secretToken: "05916d6e0eb9b91a594a209c84bc2b86e86af742a995691e957711ac4c339412"
spec:
  ports:
  - "name": "http"
    "port": 80
    "protocol": "TCP"
    "targetPort": 80
  selector:
    "app": "jupyterhub"
  "type": "LoadBalancer"
For some reason, the jhub hub pod is stuck in Pending:
$ kubectl --namespace=jhub get pod
NAME READY STATUS RESTARTS AGE
hub-678f7748f9-ncxnz 0/1 Pending 0 4m8s
proxy-5d78c5cc58-7g8h2 1/1 Running 0 4m8s
$ kubectl get service --namespace jhub
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 10.107.96.150 <none> 8081/TCP 13m
proxy-api ClusterIP 10.99.231.34 <none> 8001/TCP 13m
proxy-public LoadBalancer 10.106.84.199 192.168.1.240 80:31185/TCP,443:30004/TCP 13m
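MetalLB itself looks like it's working, since proxy-public picked up 192.168.1.240 from the pool; once the hub is running, a quick check along these lines should reach the proxy:

curl -I http://192.168.1.240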
manics
April 20, 2019, 7:23am
Do you have dynamic persistent volume provisioning on your cluster? If not, you might need to change the hub storage configuration: https://zero-to-jupyterhub.readthedocs.io/en/latest/reference.html?highlight=sqlite-memory#hub-db-type
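For example, per that link, to run the hub database in memory so no PersistentVolumeClaim is needed at all (fine for testing, but hub state is lost on restart), something like this in config.yaml should work:

hub:
  db:
    type: sqlite-memory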
kubectl describe pod
should tell you if this is the problem.
You should also change your secretToken, since you've included it in your pasted config, which is a security risk.
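A fresh token can be generated the usual z2jh way:

openssl rand -hex 32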
@manics
Thanks for replying.
My network is made up of 4 PCs (1 master and 3 minions).
I’ll run the command and read the info you recommended.
PS… that’s a token from a previous attempt.
@manics
Here is the kubectl describe pod output for the hub pod:
Name:               hub-678f7748f9-vxngc
Namespace:          jhub
Priority:           0
PriorityClassName:
Node:
Labels:             app=jupyterhub
                    component=hub
                    pod-template-hash=678f7748f9
                    release=jhub
Annotations:        checksum/config-map:
Status:             Pending
IP:
Controlled By:      ReplicaSet/hub-678f7748f9
Containers:
  hub:
    Image:      jupyterhub/k8s-hub:0.8.0
    Port:       8081/TCP
    Host Port:  0/TCP
    Command:
      jupyterhub
      --config
      /srv/jupyterhub_config.py
      --upgrade-db
    Requests:
      cpu:     200m
      memory:  512Mi
    Environment:
      PYTHONUNBUFFERED:        1
      HELM_RELEASE_NAME:       jhub
      POD_NAMESPACE:           jhub (v1:metadata.namespace)
      CONFIGPROXY_AUTH_TOKEN:  <set to the key 'proxy.token' in secret 'hub-secret'>  Optional: false
    Mounts:
      /etc/jupyterhub/config/ from config (rw)
      /etc/jupyterhub/secret/ from secret (rw)
      /srv/jupyterhub from hub-db-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from hub-token-j597n (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      hub-config
    Optional:  false
  secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-secret
    Optional:    false
  hub-db-dir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  hub-db-dir
    ReadOnly:   false
  hub-token-j597n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-token-j597n
    Optional:    false
QoS Class:       Burstable
Node-Selectors:
Events:
  Type     Reason            Age                  From               Message
  Warning  FailedScheduling  2m2s (x79 over 47m)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 4 times)
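If I'm reading that FailedScheduling event right, the hub-db-dir claim has nothing to bind to because there is no dynamic provisioner on this bare-metal cluster. Besides switching to sqlite-memory as suggested above, a static PersistentVolume should also satisfy the claim; a minimal sketch, assuming the chart's default request (1Gi, ReadWriteOnce, no storage class) and a hypothetical /data/hub-db directory on one of the nodes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-pv              # hypothetical name
spec:
  capacity:
    storage: 1Gi               # must cover the claim's request
  accessModes:
  - ReadWriteOnce              # must match the claim's access mode
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/hub-db         # hypothetical directory on a node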