I’m working on a JupyterHub (5.4.3) deployment and trying to get internal_ssl working for the communication between the singleuser process and the Hub. The Hub sits behind nginx on a webserver, and we use a custom spawner package to spawn the single-user server onto the cluster from a remote Slurm submitter node.
With internal_ssl disabled, spawning works, but when I enable internal_ssl the singleuser server is unable to connect back to the Hub API to complete the handshake. In the logs of the singleuser process I get repeated instances of the following exception:
[W 2026-01-16 16:56:53.173 ServerApp] SSL Error on 6 ('{HUB_IP}', 443): [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:992)
[E 2026-01-16 16:56:53.173 JupyterHubSingleUser] Failed to connect to my Hub at https://{HUB_URL}/hub/api (attempt 3/5). Is it running?
Traceback (most recent call last):
  File "/opt/jupyterhub/lib/python3.11/site-packages/jupyterhub/singleuser/extension.py", line 353, in check_hub_version
    resp = await client.fetch(self.hub_auth.api_url)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/jupyterhub/lib/python3.11/site-packages/tornado/simple_httpclient.py", line 338, in run
    stream = await self.tcp_client.connect(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/jupyterhub/lib/python3.11/site-packages/tornado/tcpclient.py", line 292, in connect
    stream = await stream.start_tls(
             ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/jupyterhub/lib/python3.11/site-packages/tornado/iostream.py", line 1363, in _do_ssl_handshake
    self.socket.do_handshake()
  File "/usr/lib/python3.11/ssl.py", line 1379, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:992)
I think what’s happening here is that when the notebook process connects to the Hub’s API endpoint, it is seeing the standard website certificate that nginx uses to serve the webpage, and because the webserver’s CA isn’t part of the notebooks-ca_trust.crt generated by the internal SSL setup, the certificate fails validation. If I manually append the webserver’s CA bundle to notebooks-ca_trust.crt then everything works as expected.
I can’t figure out whether there’s a way to declare this in JupyterHub’s config so that it’s automatically propagated when the internal CA files are registered. I have tried e.g. defining it in c.JupyterHub.external_ssl_authorities; when I do this it gets registered in certipy.json but is not passed through to the notebook trust bundle. Is there a ‘proper’ way of handling this, or is there something else I’m missing in the config?
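In case it helps, here is roughly what my manual-append workaround does, scripted so it can be re-run after the internal certificates are regenerated. Both paths are placeholders for illustration, not the actual locations in my deployment:

```python
# Sketch of the manual workaround: append the webserver's CA bundle to the
# internal trust file so the single-user servers also trust nginx's certificate.
# Both paths passed to this helper are placeholders.
from pathlib import Path


def append_ca(trust_bundle: str, extra_ca: str) -> bool:
    """Append extra_ca to trust_bundle once; return True if it was added."""
    bundle = Path(trust_bundle)
    extra = Path(extra_ca).read_text()
    if extra in bundle.read_text():
        return False  # already appended on a previous run
    with bundle.open("a") as f:
        f.write("\n" + extra)
    return True
```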
Are you serving the Hub API server from nginx as well?
Normally in the Slurm setting, you need to add all the compute node names, or an internal domain under which all the compute nodes are reachable, to c.JupyterHub.trusted_alt_names so that they are included in the certificate bundle.
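For example, a fragment along these lines (hostnames are placeholders, and the `DNS:`/`IP:` prefixes are the SAN-style form the option expects):

```python
# Illustrative fragment of jupyterhub_config.py. In the real file, `c` is
# provided by JupyterHub; the SimpleNamespace below is only a stand-in so
# this snippet runs on its own.
from types import SimpleNamespace

c = SimpleNamespace(JupyterHub=SimpleNamespace())

c.JupyterHub.internal_ssl = True
c.JupyterHub.trusted_alt_names = [
    "DNS:jupyterhub.example.org",  # the Hub host itself (placeholder name)
    "DNS:node001.cluster.local",   # each Slurm compute node, or...
    "DNS:*.cluster.local",         # ...an internal domain covering all of them
]
```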
> Are you serving the Hub API server from nginx as well?
nginx is serving everything on 443 on the Hub server; I don’t have any special handling set up for the API endpoint. The site config on nginx’s side is as follows:
For all of the internal services, JupyterHub is running the default configurable-http-proxy, and I haven’t changed any of the default API-related configurables.
> Normally in the Slurm setting, you need to add all the compute node names, or an internal domain under which all the compute nodes are reachable, to c.JupyterHub.trusted_alt_names so that they are included in the certificate bundle.
I’ve already set c.JupyterHub.trusted_alt_names to include the DNS names for the Hub itself and the worker nodes that the user processes will run on, and these were recognised fine in the test from the original post. As far as I can tell, the only issue is that the singleuser server can’t validate the CA of the website’s certificate, because it exclusively checks the CAs from notebooks-ca_trust.crt rather than also using the system’s CA store.
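As far as I can tell this matches how Python’s ssl module behaves: a client context built around a specific CA file trusts only that file, and the system store is never consulted unless it is loaded explicitly. A quick stdlib check (the commented-out path is a placeholder):

```python
import ssl

# A freshly created client context trusts nothing; the system CAs are only
# added if load_default_certs() is called. A context built for a specific
# cafile (as the single-user server effectively does with
# notebooks-ca_trust.crt) therefore trusts exactly that bundle and no more.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
print(len(ctx.get_ca_certs()))  # 0: no CAs trusted yet
# ctx.load_verify_locations(cafile="notebooks-ca_trust.crt")  # trust only this bundle
```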
> If I manually append the webserver’s CA bundle to notebooks-ca_trust.crt then everything works as expected.
This should not be necessary in a normal deployment. If you look at the stack trace, the single-user server is making an API request to the Hub using an HTTP client. When you turn on internal TLS, the Hub runs with TLS using the certificates it created. At the same time, the Hub creates certificates specific to each single-user server, signed by the same CA, and the single-user server uses those certificates to connect to the Hub API. These certificates are created on the server where JupyterHub is running. How are you moving these certificates onto the compute nodes where the single-user servers are running?
Thanks for the explanations - this much I understand, and as far as I can tell it is all working as it should for all the internal components. What I think is happening on top of that is that when the singleuser server connects to https://jupyterhub.domain/hub/api, nginx on jupyterhub.domain sends back its own certificate rather than the internal one.
So I suppose what’s needed here is either for the singleuser server to trust nginx’s certificate and communicate that way, or for there to be some kind of passthrough or alternative endpoint so that the singleuser server reaches the internal service directly, without talking to the reverse proxy. This is the part that I’m unsure about.
> How are you moving these certificates onto the compute nodes where the single-user servers are running?
The certificates are copied across to the user’s directory on the network storage via SCP (through the asyncssh library), with the correct permission bits so that they’re readable by the user process. This part’s all working as it should.
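Roughly, the staging step does the following (sketched here with the stdlib in place of the asyncssh/SCP copy, and with placeholder paths):

```python
# Rough stand-in for the cert-staging step: copy the per-user internal-ssl
# files into the user's directory and tighten the permission bits so that only
# the owning user can read the key material. The real deployment performs the
# copy remotely via asyncssh/SCP; all paths here are placeholders.
import shutil
from pathlib import Path


def stage_certs(cert_dir: str, dest_dir: str, mode: int = 0o600) -> list:
    """Copy every file in cert_dir into dest_dir with restrictive permissions."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    staged = []
    for src in sorted(Path(cert_dir).glob("*")):
        if src.is_file():
            out = dest / src.name
            shutil.copy2(src, out)
            out.chmod(mode)  # owner read/write only
            staged.append(out)
    return staged
```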
What I would suggest is to use an internal IP address for c.JupyterHub.hub_ip instead of the public domain name, and to add that IP address to the SAN of JupyterHub’s internal TLS certificates. This way nginx never comes into the communication between the single-user servers and the Hub. So your Hub will be running on something like https://10.0.0.1:8081 and single-user servers will connect to this URL using the self-signed internal certificates. Does that make sense?
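As a fragment of jupyterhub_config.py it would look something like this (the address is just an example; the `c` stand-in is only there so the snippet runs on its own):

```python
# Illustrative fragment of jupyterhub_config.py. In the real file, `c` is
# provided by JupyterHub; the SimpleNamespace below is only a stand-in.
from types import SimpleNamespace

c = SimpleNamespace(JupyterHub=SimpleNamespace())

c.JupyterHub.internal_ssl = True
c.JupyterHub.hub_ip = "10.0.0.1"  # internal address, not the public domain name
# Include the same address in the SAN list so the internal certificate
# validates when single-user servers connect to https://10.0.0.1:8081/hub/api.
c.JupyterHub.trusted_alt_names = ["IP:10.0.0.1"]
```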
> What I would suggest is to use an internal IP address for c.JupyterHub.hub_ip instead of the public domain name, and to add that IP address to the SAN of JupyterHub’s internal TLS certificates.
Great, this eventually worked on my end. (“Eventually” because, in the process of setting it up, I apparently also messed up the alt-name setting for the worker nodes, which made the requests from the Hub to the singleuser server appear to time out, while in fact they were quietly 403ing without any errors showing up in the logs on either end… A weird one to debug, but it seems to be working now.)