I am working with version 1.0.0-beta.1 of zero-to-jupyterhub-k8s and have enable_user_namespace enabled. A new namespace for the user gets created; however, the notebook pod of that user errors out with:
Error connecting to http://hub.jupyter-beta-version:8081/hub/api: HTTPConnectionPool(host='hub.jupyter-beta-version', port=8081): Max retries exceeded with url: /hub/api/oauth2/token (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7ffff2ff6760>: Failed to establish a new connection: [Errno 110] Connection timed out'))
Where "jupyter-beta-version" is the namespace of the main jupyter deployment. Has anyone else run into this issue?
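For reference, user namespaces are enabled with something like the following values (a sketch; the exact KubeSpawner option name enable_user_namespaces and the hub.config passthrough are assumptions, not copied from my actual config):
hub:
  config:
    KubeSpawner:
      enable_user_namespaces: true                              # assumed option name
      # user_namespace_template: "{hubnamespace}-{username}"    # optional naming template (assumption)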
Thank you,
Alisa
I assume this is because of network policies not being configured to allow users to communicate across namespaces by default. I don’t clearly remember the config options we implemented to control the network policies… They are documented here: Configuration Reference — Zero to JupyterHub with Kubernetes documentation
I think what you want is the following in your config. Does this work?
hub:
  networkPolicy:
    interNamespaceAccessLabels: accept
Thank you for your help. I set interNamespaceAccessLabels: true; however, I was still seeing the issue. That gave me an idea to dig further through the hub networkPolicy. I modified the label for the ingress from component: hub to component: singleuser-server, and that did the trick:
ingress:
- from:
  - podSelector:
      matchLabels:
        hub.jupyter.org/network-access-hub: "true"
  ports:
  - port: http
    protocol: TCP
podSelector:
  matchLabels:
    app: jupyterhub
    component: singleuser-server
    release: jhub
Thank you again,
Alisa
Thanks for outlining this, Alisa!
I wonder if doing the following would have been enough. The hub tries to contact the singleuser servers when they start up, but the singleuser servers also try to contact the hub, so both need to accept incoming traffic from the other in another namespace. Perhaps this would have done the trick?
# This is to allow the hub pod to accept traffic from pods with
# a hub.jupyter.org/network-access-hub: "true" label in another
# namespace. So, for singleuser-server ---> hub traffic.
hub:
  networkPolicy:
    interNamespaceAccessLabels: accept

# This is to allow the singleuser-server pod to accept traffic from pods with
# a hub.jupyter.org/network-access-singleuser-server: "true" label in another
# namespace. So, for hub ---> singleuser-server traffic.
singleuser:
  networkPolicy:
    interNamespaceAccessLabels: accept
If you have time to test if this works, it would be great feedback!
After changing the hub networkPolicy back to component: hub and setting all three components (singleuser, proxy, and hub) to interNamespaceAccessLabels: accept, I saw the timeout issue again.
Interesting observation: deleting the hub networkPolicy and leaving the other two (proxy and singleuser) also resolved the timeout issue; so the issue is definitely coming from the hub networkPolicy.
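For anyone wanting to reproduce this workaround from the chart config instead of deleting the resource by hand, here is a sketch (assuming the hub.networkPolicy.enabled flag; note this only works around the problem, it does not explain it):
# Disable only the hub network policy; proxy and singleuser keep theirs.
hub:
  networkPolicy:
    enabled: false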
Situation summary
I'm not sure why this fails, but I have some debugging ideas. If you have time to investigate, it would be a very useful contribution for other users and for the development of the Helm chart itself to have some insights about this.
Debugging ideas
All of the ideas assume the hub netpol is enabled and hub.networkPolicy.interNamespaceAccessLabels=accept.
Consider the k8s cluster's software and known bugs
What is the thing that enforces the rules in the NetworkPolicy resources? It can be Calico, for example, or one of many other tools. Some of them are not fully functional, and I wonder if you are perhaps using one that has some issues (one way to sanity-check enforcement is sketched after the list below).
Can you describe what you know about…
- What cloud provider is used?
- Any specific tools used to setup k8s or was it setup managed by the cloud provider?
- Kubernetes version?
- What Network Policy controller is used?
- What Container Network Interface (CNI) does the k8s cluster use?
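To sanity-check whether network policies are enforced at all (a generic test, not something from this thread): apply a deny-all ingress policy in a scratch namespace and check that it actually blocks traffic to a test pod there. If traffic still gets through, the CNI / policy controller is the problem rather than this chart. A minimal sketch, assuming a scratch namespace called netpol-test:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: netpol-test   # hypothetical scratch namespace
spec:
  podSelector: {}          # select every pod in the namespace
  policyTypes:
    - Ingress              # no ingress rules listed, so all ingress is denied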
Named port instead of port number issue?
Perhaps try kubectl edit netpol hub and modify port: http to port: 8081, as seen below.
- ports:
    - port: http
  from:
    # source 1 - labeled pods
    - podSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"
      {{- if eq .Values.hub.networkPolicy.interNamespaceAccessLabels "accept" }}
      namespaceSelector:
        matchLabels: {} # without this, the podSelector would only consider pods in the local namespace
    # source 2 - pods in labeled namespaces
    - namespaceSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"
      {{- end }}
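For concreteness, a sketch of what the edited rule could look like after swapping the named port for the number (8081 is taken from the error message above; the sources are the rendered form of the template quoted here, assuming interNamespaceAccessLabels is accept):
- ports:
    - port: 8081         # numeric port instead of the named port "http"
      protocol: TCP
  from:
    - podSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"
      namespaceSelector:
        matchLabels: {}
    - namespaceSelector:
        matchLabels:
          hub.jupyter.org/network-access-hub: "true"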
Does the user pod have the correct label?
Does the user pod have the label hub.jupyter.org/network-access-hub: "true"?
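For comparison, based on the selectors quoted earlier in this thread, the user pod's metadata should contain something like the following (the release value is whatever your Helm release is named):
metadata:
  labels:
    app: jupyterhub
    component: singleuser-server
    hub.jupyter.org/network-access-hub: "true"   # the label the hub netpol matches on
    release: jhub                                # example release name from above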
What about a namespace label?
Can you manually try adding the label hub.jupyter.org/network-access-hub: "true" to the namespace, using kubectl edit namespace <namespace-name>?
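That would make the policy's second source (the namespaceSelector with the access label, as quoted above) match the user namespace. A sketch of what the labeled namespace could look like (the namespace name is just a hypothetical example):
apiVersion: v1
kind: Namespace
metadata:
  name: jupyter-beta-version-alisa                 # hypothetical per-user namespace name
  labels:
    hub.jupyter.org/network-access-hub: "true"     # matches the namespaceSelector in the hub netpol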