Enabling JH to use an additional CA cert when calling private K8S API

I am deploying JupyterHub (v0.9.1) to an on-premises K8s cluster whose API server certificate is signed by a private CA. I have built our own JH image that simply copies in the private CA bundle and runs `update-ca-certificates` in the Dockerfile. When I `kubectl exec -it` into the deployed JupyterHub pod (running the custom image), I can successfully curl the K8s API. However, when JupyterHub itself attempts to access the K8s API, it throws an error from Tornado's web.py, like so:
SSLError("bad handshake: Error...SSL routine, 'tls_process_server_certificate', 'certificate verify failed')

This leads me to believe that Tornado is not using the system's CA chain when making outbound HTTPS calls. If that's true, how do I configure JupyterHub (via `extraConfig`?) so that all of its outbound calls to the K8s API trust the custom CA bundle?
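Something like the following in `extraConfig` is what I have in mind — a sketch only, not verified against this chart version. The bundle path assumes a Debian-based image, where `update-ca-certificates` merges everything into `ca-certificates.crt`:

```python
import os

# Assumed path: on Debian-based images, update-ca-certificates merges
# all trusted certs (including our private CA) into this single file.
ca_bundle = "/etc/ssl/certs/ca-certificates.crt"

# Point both OpenSSL (used by the stdlib ssl module, hence Tornado)
# and the Requests library at the merged bundle.
os.environ["SSL_CERT_FILE"] = ca_bundle
os.environ["REQUESTS_CA_BUNDLE"] = ca_bundle
```

Whether this takes effect presumably depends on it running before any spawner code opens a connection.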

Update: I discovered that the Requests module, which JupyterHub uses, has its own method for finding its CA bundle, so I set an env var named REQUESTS_CA_BUNDLE in my custom image and pointed it at the bundle. Now, from a Python 3 shell inside the container, I can use `requests.get` and hit the K8S API properly. However, I still see the same error in the JupyterHub logs. What additional Python module could be interfering with the bundle being used for outbound HTTPS requests?
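For reference, the stdlib `ssl` module (which Tornado and other HTTPS clients build on) resolves its CA store independently of Requests, which would explain why Requests succeeds while the hub still fails. A quick way to see which default file and env var OpenSSL honors:

```python
import ssl

# REQUESTS_CA_BUNDLE is honored only by the Requests library; the stdlib
# ssl module consults the OpenSSL defaults printed below, plus the
# SSL_CERT_FILE / SSL_CERT_DIR environment variables.
paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile_env)  # name of the env var OpenSSL honors
print(paths.openssl_cafile)      # compiled-in default CA file path
```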

Thank you for your assistance.

Update 2: After updating to K8S-Hub 0.10.2, it appears that Python 3.8.2 is now in use, and I receive a more helpful error message. Specifically:

SSL: CERTIFICATE_VERIFY_FAILED : unable to get issuer certificate (_ssl.c:1123)

Again, I can still call our K8S API with the Requests module, as well as with cURL. However, the stack trace starts in spawner.py, so perhaps something there is using yet another CA certs file that fails to verify our private K8S API's certificate.
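If the trace from spawner.py goes through the kubernetes Python client (which kubespawner uses for API calls), that client carries its own `ssl_ca_cert` setting, separate from both Requests and the system store. A hedged `extraConfig` sketch — the bundle path is an assumption, and kubespawner's in-cluster config load may override this default depending on version:

```python
# Assumed extraConfig fragment; relies on the kubernetes Python client
# that kubespawner already depends on.
from kubernetes import client

cfg = client.Configuration()
# Assumed path to the merged system bundle containing our private CA.
cfg.ssl_ca_cert = "/etc/ssl/certs/ca-certificates.crt"
client.Configuration.set_default(cfg)
```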

Failing that, is there a way to just disable SSL verification so we can move past this issue?
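The kubernetes client's `Configuration` object also exposes `verify_ssl`; a last-resort sketch that disables verification entirely (this removes server authentication, so it should only be used for debugging):

```python
# Last resort: turn off TLS verification in the kubernetes client.
# This disables server authentication entirely; fixing the CA bundle
# is strongly preferable for anything beyond debugging.
from kubernetes import client

cfg = client.Configuration()
cfg.verify_ssl = False
client.Configuration.set_default(cfg)
```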