I checked out this project with great interest more than a year ago, but at the time it seemed the project wasn’t ready to use on Red Hat OpenShift. I checked again today, and it seems that this is still the case.
I’m wondering whether anyone has been able to get Zero2JupyterHub working on OpenShift, or whether there are plans to make this possible in the near future.
In the meantime I’m trying to deploy it myself, but I’m running into security errors at the moment.
For example (I’m letting OpenShift decide on the containerSecurityContext settings):
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"jupyter-tempj06\" is forbidden: unable to validate against any security context constraint: [provider \"anyuid\": Forbidden: not usable by user or serviceaccount, provider \"pipelines-scc\": Forbidden: not usable by user or serviceaccount, spec.initContainers[0].securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000990000, 1000999999], spec.initContainers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.initContainers[0].securityContext.capabilities.add: Invalid value: \"NET_ADMIN\": capability may not be added,
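For what it’s worth, the privileged init container with NET_ADMIN in that error looks like the chart’s cloud-metadata-blocking feature, which the restricted SCC rejects. A minimal values.yaml sketch (untested on my side; key names as in recent Z2JH chart versions, so please verify against yours) that disables it and leaves the UID for OpenShift to assign:

```yaml
# Sketch: avoid the privileged iptables init container that the
# restricted SCC rejects, and let OpenShift assign the UID/GID.
singleuser:
  cloudMetadata:
    blockWithIptables: false  # removes the NET_ADMIN/root init container
  uid: null                   # don't force runAsUser; the SCC picks one
  fsGid: null
```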
- disabling the autoscaling functions (we’re dealing with a fixed set of nodes)
- using the OpenShift Route functionality instead of a load balancer
Specific-config.yaml adds an OpenShift Route (to the proxy service) and two PVCs. This is an override for a custom-created Helm template (to deploy alongside Z2JH).
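For reference, a Route of the kind described here could look roughly like the sketch below (the Route name is hypothetical; `proxy-public` is the service the Z2JH chart creates for the proxy, but check the service names in your own deployment):

```yaml
# Sketch of an OpenShift Route pointing at the Z2JH proxy service.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: jupyterhub            # hypothetical name
spec:
  to:
    kind: Service
    name: proxy-public        # the chart's public proxy service
  port:
    targetPort: http
  tls:
    termination: edge         # TLS terminated at the router (one option)
```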
I’m wondering if anyone wants to take a look at the settings and whether anything is out of order or should be configured differently.
P.s. I’ve imported all images because of our air-gapped environment. I’m only using k8s-hub, configurable-http-proxy and k8s.gcr.io/pause in our deployment.
Hey there, I literally signed up just to say THANK YOU! I have been stuck on deploying JupyterHub on OpenShift for days now. I am also very new to JupyterHub. I’m not sure if I am allowed to post this here but I will pay for a basic guide from start to finish if possible!
```
JupyterHub proxy:851] api_request to the proxy failed with status code 599, retrying…
[E 2022-02-02 19:54:29.748 JupyterHub app:2973]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 2971, in launch_instance_async
    await self.start()
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/app.py", line 2746, in start
    await self.proxy.get_all_routes()
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/proxy.py", line 898, in get_all_routes
    resp = await self.api_request('', client=client)
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/proxy.py", line 862, in api_request
    result = await exponential_backoff(
  File "/usr/local/lib/python3.8/dist-packages/jupyterhub/utils.py", line 184, in exponential_backoff
    raise TimeoutError(fail_message)
TimeoutError: Repeated api_request to proxy path "" failed.
```
I have figured out how to make this work without using the above network policy. OpenShift Local by default uses the Weave network policy controller, which doesn’t support named ports inside network policies (as used within the Helm chart). By replacing the named ports with the actual port numbers, I was able to resolve this issue.
A further issue I came across is that the default-dns service for OpenShift Local runs on port 5353 using the UDP protocol, instead of port 53.
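To make the named-port replacement concrete, here is a sketch of what one such policy could look like with a numeric port (modelled loosely on the chart’s hub policy; 8081 is the hub’s http port in recent chart versions, but verify the number and labels against your release):

```yaml
# Sketch: ingress rule using a numeric port instead of the named port "http",
# which the Weave network policy controller can't resolve.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: hub
spec:
  podSelector:
    matchLabels:
      component: hub
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 8081        # numeric, rather than "http"
          protocol: TCP
```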
I’ve added some default network policies to my original code (message from Feb 22). This would also solve the issues @unknownsolo had. I’m able to deploy to OpenShift with these (simple) objects and the Helm override.
Using custom scripting and kustomize patches seems a bit much, @Will_Holtam; that is probably only needed for your specific setup (OpenShift Local?).
I’m afraid I can’t see any network policies in the code that you linked on Feb 22nd.
The fix unknownsolo posted in Feb ’22 resolved his issue by simply opening up all ingress networking within the namespace, rather than identifying what specifically was blocking access so that the NetworkPolicy rules could stay tight.
The networking issue he was facing was due to the use of named ports inside the network policies. If he had replaced the named ports with the actual port numbers, it would have worked without opening up ingress from all pods in the namespace.
Unfortunately, because the allowed GID/UID range is generated when an OpenShift project/namespace is created, you can’t specify the GID/UID in advance. This means that jupyterhub_config.py must be changed, or the hub can’t start containers via the kube-scheduler.
```yaml
runAsUser: # let OpenShift set the value
runAsGroup: # let OpenShift set the value
allowPrivilegeEscalation: false
capabilities:
  drop:
    - ALL
runAsNonRoot: true
seccompProfile:
  type: RuntimeDefault
```
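If you’d rather not hand-edit jupyterhub_config.py, one way to sketch the same idea is through the chart’s hub.extraConfig hook, which injects Python into that file (this is my assumption of how it could be done, not a tested recipe; the KubeSpawner attribute names below should be verified against your chart and KubeSpawner versions):

```yaml
hub:
  extraConfig:
    openshift-scc: |
      # Sketch: don't hard-code a UID/GID, so the SCC-assigned values
      # from the project's allowed range are used instead.
      c.KubeSpawner.uid = None
      c.KubeSpawner.fs_gid = None
      c.KubeSpawner.allow_privilege_escalation = False
```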
You mention that you are only using k8s-hub, configurable-http-proxy and k8s.gcr.io/pause. This simplifies your deployment: it means you don’t have to manage the networking for the singleuser pods that are dynamically spun up when a user logs in through the hub.
In order to dynamically spin up singleuser pods when users log in, you must amend the security context in jupyterhub_config.py; there is no easy way to do that via values.yaml. You also have to change the singleuser network policy so that it has egress to the Kubernetes DNS service; otherwise it can’t resolve the ‘hub’ service. Note that you don’t have to do this if you’re not dynamically spinning up singleuser pods via the kube-scheduler as people log in and interact with your deployed JupyterHub.
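As a concrete illustration of that DNS point, the egress change can be expressed as a chart override via singleuser.networkPolicy.egress. This is only a sketch: the empty namespaceSelector and the 5353/UDP port (which OpenShift Local’s default-dns uses, as noted earlier) are assumptions you should narrow down for your own cluster:

```yaml
singleuser:
  networkPolicy:
    egress:
      # Allow singleuser pods to reach cluster DNS so they can resolve
      # the "hub" service (OpenShift Local serves DNS on 5353/UDP).
      - to:
          - namespaceSelector: {}   # assumption: any namespace; tighten this
        ports:
          - port: 5353
            protocol: UDP
          - port: 53
            protocol: UDP
```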