I’m following the Zero to JupyterHub with Kubernetes guide and I’ve run into an issue trying to get named servers working. The hub lets me spawn multiple named servers, but only the last one started actually responds; all the others return a 503. The logs don’t tell me much, so I’m not sure where else to turn.
I’m using k3s as my kubernetes cluster on Ubuntu 20.04 LTS.
I’m also only slightly modifying the official 1.2.0 helm chart:
```shell
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm show values jupyterhub/jupyterhub > values.yaml
```
I’ve modified the `values.yaml` in the following ways (most of it omitted for brevity):
```yaml
hub:
  allowNamedServers: true
  namedServerLimitPerUser: 10
  ## ...
cull:
  enabled: false
## ...
debug:
  enabled: true
global:
  safeToShowValues: true
```
I installed jupyterhub with the following command:
```shell
helm upgrade \
  --cleanup-on-fail \
  --install jupyterhub jupyterhub/jupyterhub \
  --namespace jupyter \
  --create-namespace \
  --version=1.2.0 \
  --values=values.yaml
```
All the pods come up as expected:
```
$ kubectl get pod -n jupyter
proxy-7478f74f4-b64zw             1/1   Running   0   17s
continuous-image-puller-4lbtq     1/1   Running   0   17s
user-scheduler-6795c686f5-8xh7f   1/1   Running   0   17s
user-scheduler-6795c686f5-psc4p   1/1   Running   0   17s
hub-7865b575cf-xrtg7              1/1   Running   0   17s
```
For the most part, everything works until I create a second server for the same user. I log in to the UI with a dummy name/password, which automatically creates the default server for my user as expected.
If I go to the hub control panel and create a named server, test1, it comes up successfully and I can access it like normal. However, my singleuser server stops working; the only thing I see is a 503 error page.
If I stop my named server, then my singleuser server starts working again. As soon as I turn on my named server again my singleuser server fails with the same error.
This also happens whenever I start two named servers: whichever one I started last works, and the others respond with 503s. The only real log I see out of the hub is:
```
[D 2022-09-06 00:47:13.946 JupyterHub pages:652] No template for 503
[I 2022-09-06 00:47:13.954 JupyterHub log:189] 200 GET /hub/error/503?url=%2Fuser%2Ftest_user (@10.42.0.158) 9.17ms
```
The proxy logs are complaining about a connection refused:
```
00:51:02.611 [ConfigProxy] error: 503 GET /user/test_user connect ECONNREFUSED 10.42.0.164:8888
```
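If it would help, I can also dump the proxy's route table to see which backend each URL is mapped to. I haven't verified this exact command; the `proxy-api` service name and port 8001 are my guesses from the chart defaults, and I'm assuming `CONFIGPROXY_AUTH_TOKEN` is set in the hub container:

```shell
# Dump configurable-http-proxy's route table from inside the hub pod.
# "proxy-api" / port 8001 are assumed from the z2jh chart defaults.
kubectl exec -n jupyter deploy/hub -- sh -c \
  'curl -s -H "Authorization: token $CONFIGPROXY_AUTH_TOKEN" http://proxy-api:8001/api/routes'
```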
And the server being complained about doesn’t log anything further, presumably because the connection never reaches it. All the pods are still running, though:
```
$ kubectl get pod -n jupyter
proxy-7478f74f4-b64zw             1/1   Running   0   20m
continuous-image-puller-4lbtq     1/1   Running   0   20m
user-scheduler-6795c686f5-8xh7f   1/1   Running   0   20m
user-scheduler-6795c686f5-psc4p   1/1   Running   0   20m
hub-7865b575cf-xrtg7              1/1   Running   0   20m
jupyter-test-5fuser--test1        1/1   Running   0   8m54s
jupyter-test-5fuser               1/1   Running   0   8m11s
```
If I exec into one of the notebook pods, the jupyter process is still running and appears to be listening on the correct port.
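For reference, this is roughly how I checked (pod name taken from the listing above; the port-probe via `python3` is just my workaround since I wasn't sure `ss`/`netstat` exist in the singleuser image):

```shell
# Confirm the notebook process is running inside the pod.
kubectl exec -n jupyter jupyter-test-5fuser--test1 -- ps aux

# Probe port 8888 from inside the pod; connect_ex prints 0 if something is listening.
kubectl exec -n jupyter jupyter-test-5fuser--test1 -- \
  python3 -c "import socket; print(socket.socket().connect_ex(('127.0.0.1', 8888)))"
```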
I’m at a bit of a loss as to how to continue troubleshooting. Any help would be greatly appreciated.
For some reason I can’t attach the actual log files, so please let me know if there’s anything additional you’d like to see.