Issues with Off-loading SSL to a Load Balancer for JupyterHub with External DNS on EKS

I am deploying a JupyterHub service on AWS EKS via Bitnami’s JupyterHub Helm chart. My cluster has external DNS enabled (using the aws-ia/terraform-aws-eks-blueprints add-on), configured to use AWS Route 53.

I am trying to deploy the chart with an HTTPS endpoint using the following manifest:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: jupyterhub
  namespace: jupyterhub
spec:
  interval: 2m
  install:
    remediation:
      retries: -1
  values:
    hub:
      password: XXXXXXX
    proxy:
      https:
        enabled: true
        type: offload
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: XXXXXXXX
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
          service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
          external-dns.alpha.kubernetes.io/hostname: jupyterhub-test.internal.my-company.com
    postgresql:
      enabled: true
    singleuser:
      image:
        registry: XXXXX
        repository: XXXXX
        tag: latest
        pullPolicy: Always
      command: [ "jupyterhub-singleuser" ]
  chart:
    spec:
      chart: jupyterhub
      version: "6.1.4"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: bitnami
      interval: 1m

The proxy section is configured as described in z2jh’s docs on off-loading-ssl-to-a-load-balancer. The SSL certificate I am specifying is a wildcard certificate valid for any DNS name ending in internal.my-company.com. The cluster’s external-dns pod logs show no errors, and neither do the load balancer controller pods; the Helm chart’s events are clean, as are the proxy pod and the proxy-public and proxy-api services. I am able to access the service over HTTP via the EXTERNAL-IP of the jupyterhub-proxy-public service, but not over HTTPS. The hostname annotation I provided (jupyterhub-test.internal.my-company.com) does not work either, over either HTTP or HTTPS.

I know the external DNS works, as I’ve used it to successfully deploy an MLflow Helm chart with the external-dns.alpha.kubernetes.io/hostname: mlflow-test.internal.my-company.com annotation, albeit over HTTP. That deployment works, and I see a record for its hostname automatically added to my Route 53 console in AWS.

I do not see external DNS adding a record for jupyterhub-test.internal.my-company.com, so I know this is at least part of the problem, but I am not seeing any errors anywhere indicating why this might be happening.

Would greatly appreciate any insight or tips on where I may be going astray or failing to look for clues. Thanks in advance!

Update: If I strip the proxy section down to just the external-dns.alpha.kubernetes.io/hostname annotation under proxy.service, the annotation is still not applied to the proxy-public service. So for some reason, the annotations are not being propagated to the proxy service at all.
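For anyone debugging something similar, a quick way to confirm whether the annotations are being dropped by the chart or by the controller is to compare the rendered templates with what is actually applied (a sketch; it assumes the release/namespace names above and that values.yaml holds the spec.values block from the HelmRelease):

```shell
# Show which annotations actually landed on the proxy-public Service
kubectl get svc jupyterhub-proxy-public -n jupyterhub \
  -o jsonpath='{.metadata.annotations}'

# Render the chart locally with the same values to see whether the
# annotations appear in the Service template output at all
helm template jupyterhub bitnami/jupyterhub --version 6.1.4 \
  -f values.yaml \
  | grep -B2 -A12 'kind: Service'
```

If the annotations are missing from the `helm template` output, the chart simply does not map that values path, which is what turned out to be the case here.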

The Z2JH docs are for configuring and installing the official JupyterHub Helm chart, not the Bitnami chart.

That’s a good callout. It was naive of me to assume values-mapping parity between the two charts. I have an issue filed with Bitnami for this and will take a crack at the official JupyterHub Helm chart. Thanks!

Can confirm that once I switched to the official JupyterHub Helm chart from the z2jh docs, it all started working as expected.
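For reference, the off-loading example in the z2jh docs for the official chart looks roughly like this (certificate ARN redacted as in my manifest above; note that annotation values must be strings, so the numeric timeout is quoted):

```yaml
proxy:
  https:
    enabled: true
    type: offload
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: XXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
      external-dns.alpha.kubernetes.io/hostname: jupyterhub-test.internal.my-company.com
```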
