Automatic HTTPS with Let’s Encrypt in the context of Binder

Hi all,

I’m trying to get automatic HTTPS enabled for the Turing’s BinderHub. Here’s where I’m at:

  • I’m piggy-backing off the Turing’s domain name to handle redirection
  • I have two A records, one for the Binder page and one for the JupyterHub
  • All the redirects work fine over HTTP, including the OAuth callback when authenticating with GitHub.

I’m following this example to enable automatic Let’s Encrypt, and my config file looks as follows:

jupyterhub:
  proxy:
    https:
      hosts:
        - binder.<my-domain-name>
        - hub.<my-domain-name>
      letsencrypt:
        contactEmail: <an-email-address-I-have-access-to>

Output from kubectl describe pod:

Events:
  Type     Reason     Age                From                                        Message
  ----     ------     ----               ----                                        -------
  Normal   Scheduled  80s                default-scheduler                           Successfully assigned hub23/autohttps-75578ff7c9-gmcw2 to aks-nodepool1-*
  Normal   Pulled     77s                kubelet, aks-nodepool1-*  Container image "jetstack/kube-lego:0.1.7" already present on machine
  Normal   Created    77s                kubelet, aks-nodepool1-*  Created container kube-lego
  Normal   Started    76s                kubelet, aks-nodepool1-*  Started container kube-lego
  Normal   Killing    39s                kubelet, aks-nodepool1-*  Container nginx failed liveness probe, will be restarted
  Warning  Unhealthy  29s (x2 over 39s)  kubelet, aks-nodepool1-*  Readiness probe failed: HTTP probe failed with statuscode: 500
  Normal   Pulled     28s (x2 over 78s)  kubelet, aks-nodepool1-*  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0" already present on machine
  Normal   Created    27s (x2 over 78s)  kubelet, aks-nodepool1-*  Created container nginx
  Normal   Started    27s (x2 over 77s)  kubelet, aks-nodepool1-*  Started container nginx
  Warning  Unhealthy  9s (x4 over 59s)   kubelet, aks-nodepool1-*  Liveness probe failed: Get http://xx.xx.xx.xx:xxxxx/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy  8s (x3 over 58s)   kubelet, aks-nodepool1-*  Readiness probe failed: Get http://xx.xx.xx.xx:xxxxx/healthz: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

I’m not really sure what’s wrong here :woman_shrugging: Any help appreciated, thank you!

I don’t think you can use the Z2JH chart to get a certificate for all of the domains. Also we don’t have any docs for doing this … :frowning: which is a bit embarrassing.

A rough guide to how it works (in my head at least): we use nginx-ingress and kube-lego. The first provides access to the Services in our cluster; the second obtains Let’s Encrypt certificates for them.
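The glue between the two is an annotation on the Ingress objects: kube-lego watches for Ingresses that request TLS and obtains certificates for their hosts. As a sketch (every name and host below is a placeholder, not our actual config):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: binder-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # kube-lego looks for this annotation and requests a certificate
    # for the hosts listed under tls below
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - binder.example.org
      # the obtained certificate is stored in this secret
      secretName: binder-example-org-tls
  rules:
    - host: binder.example.org
      http:
        paths:
          - path: /
            backend:
              serviceName: binder
              servicePort: 80

nginx-ingress then serves HTTPS for binder.example.org using that secret.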

Below are a few snippets I know of that are relevant (from the mybinder.org-deploy values files; I’d copy them into your chart to start with). The config continues further down in that file.
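Roughly, the shape of those values is something like this (an illustrative sketch only; the keys follow the stable nginx-ingress and kube-lego charts, and every value is a placeholder):

nginx-ingress:
  controller:
    service:
      # static IP your DNS records point at, assigned by the cloud provider
      loadBalancerIP: xx.xx.xx.xx

kube-lego:
  config:
    # email Let's Encrypt will contact about expiring certificates
    LEGO_EMAIL: you@example.org
    # Let's Encrypt API endpoint (use the staging URL while testing)
    LEGO_URL: https://acme-v01.api.letsencrypt.org/directory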

Unfortunately I don’t know of a nice self-contained example (one not spread across various config files in a repo). I’d also check out https://github.com/pangeo-data/pangeo-binder/blob/4e744c3451520e86e4ba722b36b4f527c95e1e8d/pangeo-binder/values.yaml from the Pangeo people.

There is also our config for the NeurIPS 2018 deployment we did, https://github.com/consideRatio/neurips.mybinder.org-deploy, which is (maybe) less complex but uses cert-manager. So similar but different.

Thanks @betatim, maybe we can start to work towards some documentation through this :slight_smile:

So my first question is: how should I go about converting my config.yaml and secret.yaml into a values.yaml and prod.yaml? For config.yaml, it looks like I need to put the binderhub key at the top level, and that’s enough to turn it into a values.yaml. Would I still be able to use the chart published here?

Also, if values.yaml and prod.yaml are the “correct” way to set up a production-ready BinderHub that a maintainer has full control over, shouldn’t the setup docs reflect that?


I’ll call the two different approaches “use the binderhub chart directly” and “create a new helm chart that depends on the binderhub chart”.

The first (“direct”) is what is described in the guide right now, similar to the z2jh guide. Pros: you can get started quickly, without first having to explain what charts are, how to create one, what all the files are, etc. Cons: at some point people will (probably) want to switch to the other method, and it is harder to translate from public examples like mybinder.org-deploy.

The second (“dependent”) is what we use to deploy mybinder.org and what I personally use 99% of the time for deploying JupyterHubs and BinderHubs. Pros: you make one neat bundle that you deploy, and you use the same setup as mybinder.org so you can more easily copy things. Cons: you need to understand more about Helm charts, and there is more boilerplate.
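For the “dependent” route, the boilerplate is mostly a Chart.yaml plus a requirements.yaml that pulls in BinderHub, along these lines (the version is a placeholder; pin whichever release you want from that repository):

# requirements.yaml of your own chart
dependencies:
  - name: binderhub
    version: "<a-binderhub-chart-version>"
    repository: https://jupyterhub.github.io/helm-chart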

In conclusion: I am not sure which method is better for teaching newcomers. I think that “get them started quickly” is important, so maybe what we also need is a guide that levels you up to “BinderHub helm chart ninja”.

The values in values.yaml (and all the other mybinder.org-deploy files) are all prefixed with the name of the chart they apply to:

binderhub:
  someOption: 42

whereas in a config.yaml you are only providing values for one chart (in this case the binderhub chart), so values don’t need a prefix:

someOption: 42

This is also why you find jupyterhub as a key in both styles of config file: the BinderHub chart depends on the JupyterHub chart.
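So a JupyterHub option set from your own chart’s values ends up two levels deep (same illustrative option name):

binderhub:
  jupyterhub:
    someOption: 42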

:turtle: It is charts all the way down…


This sounds like it’s becoming a pathways discussion, which I know @KirstieJane will love :slightly_smiling_face:

Perhaps @jhamman could help contribute to some “chart ninja” docs?

I don’t think you can use the Z2JH chart to get a certificate for all of the domains. Also we don’t have any docs for doing this … :frowning: which is a bit embarrassing.

So should my original config work for just the hub.<domain> host? :thinking:
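i.e. just dropping the binder host (same placeholders as my config above):

jupyterhub:
  proxy:
    https:
      hosts:
        - hub.<my-domain-name>
      letsencrypt:
        contactEmail: <an-email-address-I-have-access-to>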


What’s the Load Balancer IP? Is it the JupyterHub’s?

If you run kubectl get svc (for mybinder.org) there is one service of type LoadBalancer. The public IP for that service is 35.202.202.188, which is assigned to us by the cloud provider.

Looks like an nginx-ingress-controller pod. I guess this is where I deploy the chart and hope one of those shows up for Hub23?

Also, is this secret auto-generated, or do I have to create it myself? If I have to create it, what should go in it? :smile:

Yes the secret is auto-generated. kube-lego just wants to know what you want it to call the secret.
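That name is the secretName in the Ingress’s tls section, e.g. (the name here is just an example):

tls:
  - hosts:
      - hub.<my-domain-name>
    # kube-lego creates and renews this secret for you
    secretName: kubelego-tls-hub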


Cool, so the last piece of the puzzle is working out what to do with the stuff in secret.yaml. mybinder and pangeo-binder both seem to have a secret/prod.yaml, but I can’t copy the layout because they’re encrypted. My first guess would be something like:

binderhub:
  jupyterhub:
    hub:
      services:
        binder:
          apiToken: "<apiToken>"
    proxy:
      secretToken: "<secretToken>"

  registry:
    username: <username>
    password: <password>

since we’re now moving over to the “new chart with a binderhub dependency” system.


Just built a working BinderHub from my own local helm chart! :tada:


So it doesn’t look like I have HTTPS yet, and I’m not sure what I’m missing.

@jhamman Do you have any advice for getting started with kube-lego?

If you check out the repository locally, does it decrypt the content? It should, unless you don’t have the key. If you don’t have the key, see https://mybinder-sre.readthedocs.io/en/latest/production_environment.html#secrets for instructions, and I can send you the key (the crazy perks of being a mybinder.org operator :slight_smile:).

I’m pretty certain I’ve never been given the key :slight_smile:

Not really. This is something I know basically nothing about. In fact, I think I’ve had to ask @yuvipanda for help 100% of the times it has come up in our deployments.

No worries, thank you :smile:


Got this running with cert-manager in Oslo!