HTTPS with Let's Encrypt on JupyterHub in Kubernetes

I’m trying to set up HTTPS on JupyterHub using these instructions.

The first part of my config file looks something like this:

proxy:
  https:
    hosts:
      - jupyter.domain.org
    letsencrypt:
      contactEmail: 'email@gmail.com'
  secretToken: "thesecrettoken"

When I try to access https://jupyter.domain.org, I get an Unable to Connect message. Before setting up HTTPS, jupyter.domain.org was accessible; now it isn’t, because plain HTTP requests redirect to https://jupyter.domain.org. Accessing the hub via its IP address still works, but that connection isn’t encrypted.

kubectl describe pod on the autohttps pod gives:

  Normal  Scheduled  55m   default-scheduler  Successfully assigned jhub/autohttps-7b4fb9dd6b-7p9nw to node6
  Normal  Pulled     55m   kubelet, node6 Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0" already present on machine
  Normal  Created    55m   kubelet, node6 Created container nginx
  Normal  Started    55m   kubelet, node6 Started container nginx
  Normal  Pulled     55m   kubelet, node6 Container image "jetstack/kube-lego:0.1.7" already present on machine
  Normal  Created    55m   kubelet, node6 Created container kube-lego
  Normal  Started    55m   kubelet, node6 Started container kube-lego

Thanks for any help!

Related issue here 🙂
If it isn’t working for JupyterHub either, could there be a bug? @betatim?

@celine168 could you post the output of kubectl logs <thepodyoudescribed>?

If you type jupyter.domain.org into https://crt.sh/, does it show that certificates have been issued?
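
You could also check which names the certificate nginx is actually serving covers, e.g. with openssl (a sketch; swap in your real hostname):

# Print the Subject Alternative Names of the certificate being served
echo | openssl s_client -connect jupyter.domain.org:443 -servername jupyter.domain.org 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'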

I don’t think anything has changed in the Zero to JupyterHub helm chart, so the instructions should continue to work. cc @consideRatio, who knows more about the z2jh chart’s state.

It does show a certificate when I type my domain into crt.sh.

It seems that there are two containers in the pod named autohttps-7b4fb9dd6b-h75pv, both of which are ready.

kubectl logs autohttps-7b4fb9dd6b-h75pv -n jhub -c nginx yields:

W0708 23:15:45.403955       6 controller.go:1026] unexpected error validating SSL certificate jhub/kubelego-tls-proxy-jhub for host www.jupyter.domain.org. Reason: x509: certificate is valid for jupyter.domain.org, not www.jupyter.domain.org
W0708 23:15:45.403996       6 controller.go:1027] Validating certificate against DNS names. This will be deprecated in a future version.
W0708 23:15:45.404019       6 controller.go:1032] ssl certificate jhub/kubelego-tls-proxy-jhub does not contain a Common Name or Subject Alternative Name for host www.jupyter.domain.org. Reason: x509: certificate is valid for jupyter.domain.org, not www.jupyter.domain.org
I0708 23:15:45.404083       6 controller.go:168] backend reload required
I0708 23:15:45.530176       6 controller.go:177] ingress backend successfully reloaded...
I0708 23:15:45.587496       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"jhub", Name:"kube-lego-nginx", UID:"c4e7d01e-a137-11e9-ad13-00259051cf1c", APIVersion:"extensions", ResourceVersion:"6190859", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress jhub/kube-lego-nginx
W0708 23:15:48.737407       6 controller.go:1026] unexpected error validating SSL certificate jhub/kubelego-tls-proxy-jhub for host www.jupyter.domain.org. Reason: x509: certificate is valid for jupyter.domain.org, not www.jupyter.domain.org
W0708 23:15:48.737459       6 controller.go:1027] Validating certificate against DNS names. This will be deprecated in a future version.
W0708 23:15:48.737483       6 controller.go:1032] ssl certificate jhub/kubelego-tls-proxy-jhub does not contain a Common Name or Subject Alternative Name for host www.jupyter.domain.org. Reason: x509: certificate is valid for jupyter.domain.org, not www.jupyter.domain.org
I0708 23:15:48.737537       6 controller.go:168] backend reload required
I0708 23:15:48.905680       6 controller.go:177] ingress backend successfully reloaded...
I0708 23:16:32.350292       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"jhub", Name:"jupyterhub-internal", UID:"c22712e1-a1cc-11e9-ad13-00259051cf1c", APIVersion:"extensions", ResourceVersion:"6191437", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress jhub/jupyterhub-internal
I0708 23:16:32.362016       6 controller.go:168] backend reload required
I0708 23:16:32.742829       6 controller.go:177] ingress backend successfully reloaded...
I0708 23:21:41.906561       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"jhub", Name:"kube-lego-nginx", UID:"c4e7d01e-a137-11e9-ad13-00259051cf1c", APIVersion:"extensions", ResourceVersion:"6192429", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress jhub/kube-lego-nginx
I0708 23:21:41.906879       6 controller.go:168] backend reload required
I0708 23:21:42.297787       6 controller.go:177] ingress backend successfully reloaded...
.......
W0709 17:51:19.412270       6 reflector.go:341] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:140: watch of *v1.Endpoints ended with: too old resource version: 6393920 (6394923)
W0709 18:05:00.419281       6 reflector.go:341] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:140: watch of *v1.Endpoints ended with: too old resource version: 6396983 (6397441)
W0709 18:20:01.431561       6 reflector.go:341] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:140: watch of *v1.Endpoints ended with: too old resource version: 6399504 (6400210)
W0709 18:31:25.441979       6 reflector.go:341] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:140: watch of *v1.Endpoints ended with: too old resource version: 6402273 (6402312)
W0709 18:49:37.454779       6 reflector.go:341] k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:140: watch of *v1.Endpoints ended with: too old resource version: 6404380 (6405675)

There is more output before this; I’m not sure if it’s useful.

kubectl logs autohttps-7b4fb9dd6b-h75pv -n jhub -c kube-lego gives:

time="2019-07-08T22:07:53Z" level=info msg="connecting to kubernetes api: https://10.96.0.1:443" context=kubelego
time="2019-07-08T22:07:53Z" level=info msg="successfully connected to kubernetes api v1.14.3" context=kubelego
time="2019-07-08T22:07:53Z" level=info msg="server listening on http://:8080/" context=acme
time="2019-07-08T22:07:53Z" level=info msg="Queued item \"jhub/jupyterhub-internal\" to be processed immediately" context=kubelego
time="2019-07-08T22:07:53Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-08T22:07:53Z" level=info msg="cert expires in 89.2 days, no renewal needed" context=ingress_tls expire_time="2019-10-06 03:21:11 +0000 UTC" name=jupyterhub-internal namespace=jhub
time="2019-07-08T22:07:53Z" level=info msg="no cert request needed" context=ingress_tls name=jupyterhub-internal namespace=jhub
time="2019-07-08T23:15:45Z" level=info msg="Detected spec change - queued ingress \"jhub/jupyterhub-internal\" to be processed" context=kubelego
time="2019-07-08T23:15:45Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-08T23:15:45Z" level=info msg="cert does not cover all domains" context=ingress_tls domains="[www.jupyter.domain.org]" name=jupyterhub-internal namespace=jhub
time="2019-07-08T23:15:45Z" level=info msg="requesting certificate for www.jupyter.domain.org" context=ingress_tls name=jupyterhub-internal namespace=jhub
time="2019-07-08T23:16:32Z" level=info msg="Detected spec change - queued ingress \"jhub/jupyterhub-internal\" to be processed" context=kubelego
time="2019-07-08T23:21:41Z" level=warning msg="authorization failed after 5m0s: reachability test failed: Get http://www.jupyter.domain.org/.well-known/acme-challenge/_selftest: dial tcp: lookup www.jupyter.domain.org on 10.96.0.10:53: no such host" context=acme domain=www.jupyter.domain.org
time="2019-07-08T23:21:41Z" level=error msg="Error while processing certificate requests: no domain could be authorized successfully" context=kubelego
time="2019-07-08T23:21:41Z" level=error msg="worker: error processing item, requeuing after rate limit: no domain could be authorized successfully" context=kubelego
time="2019-07-08T23:21:41Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-08T23:21:41Z" level=info msg="cert expires in 89.2 days, no renewal needed" context=ingress_tls expire_time="2019-10-06 03:21:11 +0000 UTC" name=jupyterhub-internal namespace=jhub
time="2019-07-08T23:21:41Z" level=info msg="no cert request needed" context=ingress_tls name=jupyterhub-internal namespace=jhub
time="2019-07-08T23:31:41Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-08T23:31:41Z" level=info msg="cert expires in 89.2 days, no renewal needed" context=ingress_tls expire_time="2019-10-06 03:21:11 +0000 UTC" name=jupyterhub-internal namespace=jhub
time="2019-07-08T23:31:41Z" level=info msg="no cert request needed" context=ingress_tls name=jupyterhub-internal namespace=jhub
time="2019-07-09T06:07:53Z" level=info msg="Periodically check certificates at 2019-07-09 06:07:53.421953244 +0000 UTC m=+28800.108424074" context=kubelego
time="2019-07-09T06:17:53Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-09T06:17:53Z" level=info msg="cert expires in 88.9 days, no renewal needed" context=ingress_tls expire_time="2019-10-06 03:21:11 +0000 UTC" name=jupyterhub-internal namespace=jhub
time="2019-07-09T06:17:53Z" level=info msg="no cert request needed" context=ingress_tls name=jupyterhub-internal namespace=jhub
time="2019-07-09T06:17:53Z" level=info msg="ignoring as has no annotation 'hub.jupyter.org/tls-terminator'" context=ingress name=kube-lego-nginx namespace=jhub
time="2019-07-09T14:07:53Z" level=info msg="Periodically check certificates at 2019-07-09 14:07:53.421956771 +0000 UTC m=+57600.108427601" context=kubelego
time="2019-07-09T14:17:53Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-09T14:17:53Z" level=info msg="cert expires in 88.5 days, no renewal needed" context=ingress_tls expire_time="2019-10-06 03:21:11 +0000 UTC" name=jupyterhub-internal namespace=jhub
time="2019-07-09T14:17:53Z" level=info msg="no cert request needed" context=ingress_tls name=jupyterhub-internal namespace=jhub
time="2019-07-09T14:17:53Z" level=info msg="ignoring as has no annotation 'hub.jupyter.org/tls-terminator'" context=ingress name=kube-lego-nginx namespace=jhub

I did try changing the entry under hosts in config.yaml to www.jupyter.domain.org as a test, which is probably why some of the logs mention www. (The kube-lego log above shows the ACME self-test for www.jupyter.domain.org failing because that name doesn’t resolve.)
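
For reference, if both the bare domain and the www name are supposed to work, I believe both need DNS records pointing at the cluster and both would be listed under hosts — something like:

proxy:
  https:
    hosts:
      - jupyter.domain.org
      - www.jupyter.domain.org
    letsencrypt:
      contactEmail: 'email@gmail.com'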

@betatim
We’ve tried running helm upgrade again and messed around with the pods a little. Hopefully these updated logs tell you something?

$ kubectl logs autohttps-7b4fb9dd6b-r5l7v -n jhub -c nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    0.15.0
  Build:      git-df61bd7
  Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------

I0716 18:59:05.785896       6 flags.go:162] Watching for ingress class: jupyterhub-proxy-tls
W0716 18:59:05.785943       6 flags.go:165] only Ingress with class "jupyterhub-proxy-tls" will be processed by this ingress controller
W0716 18:59:05.786391       6 client_config.go:533] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0716 18:59:05.786558       6 main.go:158] Creating API client for https://10.96.0.1:443
I0716 18:59:05.799659       6 main.go:202] Running in Kubernetes Cluster version v1.14 (v1.14.3) - git (clean) commit 5e53fd6bc17c0dec8434817e69b04a25d8ae0ff0 - platform linux/amd64
I0716 18:59:05.802203       6 main.go:84] validated jhub/proxy-http as the default backend
I0716 18:59:06.285579       6 stat_collector.go:77] starting new nginx stats collector for Ingress controller running in namespace jhub (class jupyterhub-proxy-tls)
I0716 18:59:06.285608       6 stat_collector.go:78] collector extracting information from port 18080
I0716 18:59:06.307858       6 nginx.go:278] starting Ingress controller
I0716 18:59:06.314907       6 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"jhub", Name:"nginx-proxy-config", UID:"c52dbd97-a7fb-11e9-ad13-00259051cf1c", APIVersion:"v1", ResourceVersion:"8303884", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap jhub/nginx-proxy-config
I0716 18:59:07.411913       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"jhub", Name:"jupyterhub-internal", UID:"c5ee3050-a7fb-11e9-ad13-00259051cf1c", APIVersion:"extensions", ResourceVersion:"8303966", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress jhub/jupyterhub-internal
I0716 18:59:07.414347       6 backend_ssl.go:69] adding secret jhub/kubelego-tls-proxy-jhub to the local store
I0716 18:59:07.414614       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"jhub", Name:"kube-lego-nginx", UID:"f9b64488-a6a3-11e9-ad13-00259051cf1c", APIVersion:"extensions", ResourceVersion:"7836097", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress jhub/kube-lego-nginx
I0716 18:59:07.508488       6 nginx.go:299] starting NGINX process...
I0716 18:59:07.508766       6 leaderelection.go:175] attempting to acquire leader lease  jhub/ingress-controller-leader-jupyterhub-proxy-tls...
W0716 18:59:07.509464       6 controller.go:773] service jhub/kube-lego-nginx does not have any active endpoints
I0716 18:59:07.509704       6 controller.go:168] backend reload required
I0716 18:59:07.509775       6 stat_collector.go:34] changing prometheus collector from  to default
I0716 18:59:07.525458       6 status.go:196] new leader elected: autohttps-69b66cb569-wd2hs
I0716 18:59:07.687617       6 controller.go:177] ingress backend successfully reloaded...
W0716 18:59:10.842938       6 controller.go:773] service jhub/kube-lego-nginx does not have any active endpoints
I0716 18:59:12.862142       6 backend_ssl.go:181] updating local copy of ssl certificate jhub/kubelego-tls-proxy-jhub with missing intermediate CA certs
W0716 18:59:14.176266       6 controller.go:773] service jhub/kube-lego-nginx does not have any active endpoints
I0716 18:59:14.176449       6 controller.go:168] backend reload required
I0716 18:59:14.375069       6 controller.go:177] ingress backend successfully reloaded...
W0716 18:59:17.509601       6 controller.go:773] service jhub/kube-lego-nginx does not have any active endpoints
I0716 18:59:20.843132       6 controller.go:168] backend reload required
I0716 18:59:21.048723       6 controller.go:177] ingress backend successfully reloaded...
I0716 18:59:38.162553       6 leaderelection.go:184] successfully acquired lease jhub/ingress-controller-leader-jupyterhub-proxy-tls
I0716 18:59:38.162630       6 status.go:196] new leader elected: autohttps-7b4fb9dd6b-r5l7v
I0716 19:00:38.190951       6 status.go:361] updating Ingress jhub/jupyterhub-internal status to [{ }]
I0716 19:00:38.194791       6 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"jhub", Name:"jupyterhub-internal", UID:"c5ee3050-a7fb-11e9-ad13-00259051cf1c", APIVersion:"extensions", ResourceVersion:"8304326", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress jhub/jupyterhub-internal

From the other container:

$ kubectl logs autohttps-7b4fb9dd6b-r5l7v -n jhub -c kube-lego
time="2019-07-16T18:59:07Z" level=info msg="kube-lego 0.1.6-61705680 starting" context=kubelego
time="2019-07-16T18:59:07Z" level=info msg="connecting to kubernetes api: https://10.96.0.1:443" context=kubelego
time="2019-07-16T18:59:07Z" level=info msg="successfully connected to kubernetes api v1.14.3" context=kubelego
time="2019-07-16T18:59:07Z" level=info msg="server listening on http://:8080/" context=acme
time="2019-07-16T18:59:07Z" level=info msg="Queued item \"jhub/jupyterhub-internal\" to be processed immediately" context=kubelego
time="2019-07-16T18:59:07Z" level=info msg="process certificate requests for ingresses" context=kubelego
time="2019-07-16T18:59:07Z" level=info msg="cert expires in 88.2 days, no renewal needed" context=ingress_tls expire_time="2019-10-13 00:58:19 +0000 UTC" name=jupyterhub-internal namespace=jhub
time="2019-07-16T18:59:07Z" level=info msg="no cert request needed" context=ingress_tls name=jupyterhub-internal namespace=jhub

In our nginx.conf file, we forward requests that arrive at the public IP address to the external IP of the proxy-public service. (The external IPs are part of an address pool for MetalLB, a bare-metal load balancer, on a 10.0.1.x subnet.) Maybe the SSL certificate can’t be reached at that IP address, since it’s located inside the cluster? Thank you for any help!

We fixed our problem, so I’ll document it here in case anyone else runs into the same issue.

We had initially configured nginx on our management node so that traffic was forwarded to the EXTERNAL-IPs of the JupyterHub service in the cluster from within an http block. We theorize that this decrypted the traffic and then sent it on to the cluster, which doesn’t make sense, since the autohttps nginx pod is the one that’s supposed to terminate TLS.

We instead used a stream block to forward the requests, so the encrypted connections pass through to the cluster untouched.
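
For anyone who runs into this, here is a minimal sketch of what we mean (10.0.1.10 is a hypothetical stand-in for the proxy-public EXTERNAL-IP from our MetalLB pool; substitute your own):

# Top level of nginx.conf on the management node (needs the stream module).
# Raw TCP is passed through, so the autohttps pod still terminates TLS itself.
stream {
    server {
        listen 443;
        proxy_pass 10.0.1.10:443;  # proxy-public EXTERNAL-IP (hypothetical)
    }
    server {
        listen 80;                 # plain HTTP; the proxy redirects it to HTTPS
        proxy_pass 10.0.1.10:80;
    }
}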