Hi, I’m running the CHP separately from the hub (in another pod on Kubernetes). At initialization everything is OK, but after a while CHP updates its routing config with the wrong target info.
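(In case it helps to reproduce this: the routing table can be read back from CHP’s REST API; a minimal sketch, assuming the default API port 8001, the shared token in CONFIGPROXY_AUTH_TOKEN, and a placeholder proxy-api service name:)

```python
# Sketch: fetch CHP's current routing table from its REST API.
# "proxy-api" is a placeholder for the service in front of the CHP API port (8001).
import os
import requests

resp = requests.get(
    "http://proxy-api:8001/api/routes",
    headers={"Authorization": f"token {os.environ['CONFIGPROXY_AUTH_TOKEN']}"},
)
resp.raise_for_status()
# Each key is a route prefix; its "target" shows where requests get forwarded.
print(resp.json())
```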
API response right after pod initialization, when everything is OK:
call:
What does your JupyterHub pod log show at that moment in time? What does the hub believe its IP/hostname is? I think the hub will repair routes in the proxy if it thinks they aren’t pointing to the right place. It does so every once in a while, which could explain why it works at first and then breaks about 10 minutes after startup.
I think the c.JupyterHub.ip field has to be an IP, and for a Kubernetes setup it should be '0.0.0.0' instead of a hostname.
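Roughly what I have in mind for jupyterhub_config.py (a sketch; the jupyterhub service name is only an example based on your description, use whatever your service is actually called):

```python
# Sketch: bind the hub on all interfaces, but advertise the service name
# so the proxy and single-user pods have a reachable address for it.
c.JupyterHub.ip = '0.0.0.0'                 # bind address inside the pod, not a hostname
c.JupyterHub.hub_connect_ip = 'jupyterhub'  # example: the service in front of the hub pod
```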
One thing to check is to log in to the hub pod (with kubectl exec) and see what jupyterhub resolves to. My guess is that it gives you 127.0.0.1.
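Something along these lines run inside the hub pod would show it (a sketch; jupyterhub is the hostname from your setup):

```python
# Sketch: check what the hostname resolves to from inside the hub pod,
# e.g. run via kubectl exec into the pod with python3 -c "...".
import socket

# 'jupyterhub' is the service hostname described in this thread
print(socket.gethostbyname('jupyterhub'))  # expect the service IP, not 127.0.0.1
```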
As a remark: we have z2jh.jupyter.org/, which is a whole project about setting up and maintaining a production JupyterHub on Kubernetes. I think it would be a good idea to use that if you plan on running a hub for a longer time. Building something new by hand is a great way to learn how it all works, though.
Hi,
At first c.JupyterHub.ip, c.JupyterHub.hub_connect_ip and c.JupyterHub.hub_connect_url were not set, but the behavior of the system was the same. I tried a lot of combinations of those configs; I will try 0.0.0.0 again to be sure.
To be precise, we have OpenShift, and the jupyterhub hostname is handled by a service pointing to the Hub pod. When I ping the hostname (from both the Hub and Proxy pods), it resolves to the service’s IP as expected.
I’m well aware of the site, it’s an excellent source and I check it very often, but we are not allowed to use Helm with OpenShift. Corporate IT rules are a whole other story.
Have you checked how we configure the CHP in the Z2JH chart (with an IP or a service)? I don’t really have a good idea left beyond comparing how things are set up in Z2JH with your setup. Especially if jupyterhub resolves to the IP of the service in both pods, I don’t know where the hub gets the idea to set it to 127.0.0.1.
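For that comparison, the relevant knobs on the hub side are roughly the following (a sketch, not the exact values from the Z2JH chart; the service names are placeholders):

```python
# Sketch: hub configured to use an externally-run CHP, for comparison with Z2JH.
# Service names are placeholders; check the chart for the real ones.
import os

c.JupyterHub.proxy_class = 'jupyterhub.proxy.ConfigurableHTTPProxy'
c.ConfigurableHTTPProxy.should_start = False                # CHP runs in its own pod
c.ConfigurableHTTPProxy.api_url = 'http://proxy-api:8001'   # service in front of CHP's API port
c.ConfigurableHTTPProxy.auth_token = os.environ['CONFIGPROXY_AUTH_TOKEN']
c.JupyterHub.hub_connect_ip = 'jupyterhub'                  # service in front of the hub pod
```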
Sorry to hear that. Have you tried the Helm option to generate YAML (called something like “local render” or “render only”) and then kubectl apply -f that? Maybe even just temporarily to see what it does.
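The command I have in mind is helm template, which renders the chart to plain YAML locally without needing Helm on the cluster side; a rough sketch of driving it from Python (Helm 3 syntax, with a made-up release name and values file):

```python
# Sketch: render the Z2JH chart to plain YAML locally, then apply it with kubectl.
# "jhub" and "config.yaml" are only example names.
import subprocess

rendered = subprocess.run(
    ["helm", "template", "jhub", "jupyterhub/jupyterhub", "--values", "config.yaml"],
    check=True, capture_output=True, text=True,
).stdout

subprocess.run(["kubectl", "apply", "-f", "-"], input=rendered, check=True, text=True)
```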