JupyterHub proxy-public svc has no external IP (stuck in <pending>)

I am using Helm to deploy JupyterHub (chart version 0.8.2) to Kubernetes on AWS's managed Kubernetes service, EKS. My Helm config describes the proxy-public service, fronted by an AWS Elastic Load Balancer (ELB):

proxy:
  secretToken: ""
  https:
    enabled: true
    type: offload
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ...
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1801"

Problem: When I deploy JupyterHub to EKS via helm:

helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.8.2 --values config.yaml

The proxy-public svc never gets an external IP; it is stuck in the pending state:

> kubectl get svc
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
hub            ClusterIP                   <none>        8081/TCP                     15m
proxy-api      ClusterIP                   <none>        8001/TCP                     15m
proxy-public   LoadBalancer                <pending>     80:31958/TCP,443:30470/TCP   15m

I ran kubectl describe svc proxy-public and kubectl get events, and nothing appeared out of the ordinary. No errors.
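For anyone debugging a similar situation, these are the checks I ran, as a sketch (assuming the chart was installed into the jhub namespace). Note that the errors from the in-tree AWS ELB provisioner land in the kube-controller-manager logs, which on EKS are only visible if you have enabled control plane logging for the controller manager.

```shell
# Describe the service and look at its Events section for provisioning errors.
kubectl describe svc proxy-public -n jhub

# Narrow the event stream to just this service.
kubectl get events -n jhub --field-selector involvedObject.name=proxy-public
```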

The problem turned out to be that I had mistakenly placed the Kubernetes cluster (and its control plane) in private subnets only, which made it impossible for the ELB to get an external IP.
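The fix in my case was to give the VPC public subnets that the AWS cloud provider can discover; it selects subnets for a public ELB by tag, so public subnets need kubernetes.io/role/elb = 1 plus a kubernetes.io/cluster/<name> tag. A sketch of the tagging step, assuming a hypothetical cluster name jhub-cluster and placeholder subnet IDs (the snippet only echoes the aws CLI commands as a dry run; drop the echo to actually apply them):

```shell
# Dry run: print the tag commands the fix would require.
# CLUSTER_NAME and the subnet IDs are placeholders -- substitute your own.
CLUSTER_NAME="jhub-cluster"
for SUBNET in subnet-aaaa1111 subnet-bbbb2222; do
  echo aws ec2 create-tags --resources "$SUBNET" \
    --tags "Key=kubernetes.io/role/elb,Value=1" \
           "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"
done
```

After tagging (or recreating the subnets as public ones with a route to an internet gateway), deleting and recreating the proxy-public service lets the cloud provider retry ELB provisioning.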