[WIP] Documentation about cert-manager

Setting up HTTPS

To set up secure HTTPS communication, as you should, you need a certificate for your domain issued by a Certificate Authority (CA). Thankfully, there is Let’s Encrypt, a widely trusted CA that issues certificates free of charge.

But CAs won’t just give away certificates: they require proof of domain ownership through a challenge. This is technically cumbersome and often automated by tools like kube-lego and cert-manager.

With a certificate in place, there still needs to be some place where encrypted traffic is decrypted and vice versa; this is often referred to as TLS termination.

HTTPS vs TLS
HTTPS is the secured version of HTTP: HyperText Transfer Protocol. HTTP is the protocol used by your browser and web servers to communicate and exchange information. When that exchange of data is encrypted with SSL/TLS, we call it HTTPS. The S stands for Secure.

TLS termination, the transition from encrypted to unencrypted traffic, can be done in various locations. A common choice in the world of Kubernetes is to let this be managed by Ingress controllers. An Ingress controller is something that makes Kubernetes Ingress resources come to life. It is possible to use an ingress controller made available by your cloud provider, or one you host yourself in the Kubernetes cluster, such as nginx-ingress.

Below is a guide on how to use cert-manager along with nginx-ingress, both of which can be installed as Helm charts.

How cert-manager works

cert-manager looks at Kubernetes Ingress resources. For each Ingress resource it finds, it further looks at the resource's annotations. Do they indicate that it wants help from cert-manager? If so, cert-manager will try to provision a certificate from Let’s Encrypt!

An example of an annotation that indicates to cert-manager that it should help out is:

kubernetes.io/tls-acme: "true"

If it decides to help out the Ingress resource, cert-manager will look further at the Ingress object. What host or domains does it want to handle traffic to? From what Kubernetes Secret resource does it want to read the certificate? It will also combine this information with potential default settings, like what Issuer to use.
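As an illustration, a hypothetical Ingress resource that cert-manager would act on could look like the sketch below. The name, domain, and Secret name are made up; the API version matches what was common around cert-manager 0.10.

```yaml
# Hypothetical example: an Ingress that cert-manager would act on.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-webapp
  annotations:
    # Ask cert-manager to manage a certificate for this Ingress
    kubernetes.io/tls-acme: "true"
spec:
  rules:
    - host: example.my-domain.com   # the domain to handle traffic for
      http:
        paths:
          - backend:
              serviceName: my-webapp
              servicePort: 80
  tls:
    - secretName: my-webapp-tls     # the Secret the certificate is read from
      hosts:
        - example.my-domain.com
```

cert-manager reads the host and secretName from here, combines them with its configured defaults, and stores the issued certificate in the named Secret.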

An Issuer is a cert-manager concept: it describes which certificate authority to speak with, what email address to provide as a contact, and what kind of challenge to use. Defaults like these can be configured in cert-manager’s Helm chart values.

# example configuration of cert-manager default values
# on how to go about getting certificates.
cert-manager:
  ingressShim:
    defaultIssuerName: "my-manually-created-issuer-resource"
    defaultIssuerKind: "Issuer"
    defaultACMEChallengeType: "http01"

(INCOMPLETE BELOW) How to set up cert-manager

Assumptions

  1. We use the nginx-ingress helm chart, which contains an Ingress controller.
  2. We use a parent helm chart to deploy JupyterHub

Helm Chart File Structure

local-helm-chart/
|-files/
| |-etc/
| | |-jupyter/
| | | |-templates/
| | | | |-login.html
| | | | |-page.html
| | | |
| | | |-jupyter_notebook_config.py
|
|-templates/
| |-clusterissuer.yaml
| |-user-configmap.yaml
| |-_helpers.tpl
|
|-Chart.yaml
|-requirements.yaml
|-values.yaml

Installation/Setup

Step 1) Install the Custom Resource Definitions (CRDs).

kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.10/deploy/manifests/00-crds.yaml

Step 2) Create clusterissuer.yaml file.

---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: prod
  labels:
    helm.sh/chart: {{ include "hub23-chart.chart" . }}
    app.kubernetes.io/name: {{ include "hub23-chart.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: {{ .Values.letsencrypt.contactEmail }}
    privateKeySecretRef:
      name: prod-acme-key
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: staging
  labels:
    helm.sh/chart: {{ include "hub23-chart.chart" . }}
    app.kubernetes.io/name: {{ include "hub23-chart.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: {{ .Values.letsencrypt.contactEmail }}
    privateKeySecretRef:
      name: staging-acme-key
    http01: {}

Step 3) Create user-configmap.yaml file.

kind: ConfigMap
apiVersion: v1
metadata:
  name: user-etc-jupyter
  labels:
    app: jupyterhub
    component: etc-jupyter
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}

data:
  {{- range $name, $content := .Values.etcJupyter }}
  {{- if eq (typeOf $content) "string" }}
  {{ $name }}: |
    {{- $content | nindent 4 }}
  {{- else }}
  {{ $name }}: {{ $content | toJson | quote }}
  {{- end }}
  {{- end }}
  {{- (.Files.Glob "files/etc/jupyter/*").AsConfig | nindent 2 }}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: user-etc-jupyter-templates
  labels:
    app: jupyterhub
    component: etc-jupyter
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
data:
  {{- (.Files.Glob "files/etc/jupyter/templates/*").AsConfig | nindent 2 }}
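The range block in the first ConfigMap iterates over an etcJupyter mapping in values.yaml, rendering string values as literal blocks and anything else as quoted JSON. A hypothetical values.yaml fragment it could consume (file names and contents made up for illustration):

```yaml
# Hypothetical values.yaml fragment consumed by the user-etc-jupyter ConfigMap.
etcJupyter:
  # A string value: rendered as a literal block ({{ $name }}: | ...)
  extra_note.txt: "Managed by the local helm chart"
  # A non-string value (a mapping): rendered via toJson | quote
  some_config.json:
    shutdown_no_activity_timeout: 3600
```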

Step 4) Create _helpers.tpl file.

{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "hub23-chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "hub23-chart.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "hub23-chart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
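These helpers become available to the chart's other templates via include. As a sketch (not taken from the original chart), a metadata block elsewhere in templates/ could use them like this:

```yaml
# Hypothetical usage of the helpers defined in _helpers.tpl
metadata:
  name: {{ include "hub23-chart.fullname" . }}
  labels:
    helm.sh/chart: {{ include "hub23-chart.chart" . }}
    app.kubernetes.io/name: {{ include "hub23-chart.name" . }}
```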

Step 5) Create login.html file.

{% extends "templates/login.html" %}
{% block site %}

<div id="ipython-main-app" class="container">
  <h1>Binder inaccessible</h1>
  <h2>
    You can get a new Binder for this repo by clicking <a href="{{binder_url}}">here</a>.
  </h2>
  <p>
    The shareable URL for this repo is: <tt>{{binder_url}}</tt>
  </p>

  <h4>Is this a Binder that you created?</h4>
  <p>
    If so, your authentication cookie for this Binder has been deleted or expired.
    You can launch a new Binder for this repo by clicking <a href="{{binder_url}}">here</a>.
  </p>

  <h4>Did someone give you this Binder link?</h4>
  <p>
    If so, the link is outdated or incorrect.
    Recheck the link for typos or ask the person who gave you the link for an updated link.
    A shareable Binder link should look like <tt>{{binder_url}}</tt>.
  </p>
</div>
{% endblock site %}

Step 6) Create page.html file.

{% extends "templates/page.html" %}
{% block login_widget %}{% endblock %}

Step 7) Create jupyter_notebook_config.py file.

import os
c.NotebookApp.extra_template_paths.append('/etc/jupyter/templates')
c.NotebookApp.jinja_template_vars.update({
    'binder_url': os.environ.get('BINDER_URL', 'https://mybinder.org'),
})
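The jinja_template_vars line above falls back to the public mybinder.org URL whenever BINDER_URL is not set in the container's environment. The lookup behaves like this quick sketch (run outside Jupyter):

```python
import os

# Same fallback pattern as in jupyter_notebook_config.py:
# use BINDER_URL from the environment if present, else the public default.
binder_url = os.environ.get('BINDER_URL', 'https://mybinder.org')
print(binder_url)
```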

Step 8) Add nginx-ingress and cert-manager dependencies to requirements.yaml.

dependencies:
  # https://github.com/helm/charts/tree/master/stable/nginx-ingress
  - name: nginx-ingress
    version: "1.19.0"
    repository: "https://kubernetes-charts.storage.googleapis.com"

  # https://github.com/helm/charts/tree/master/stable/cert-manager
  - name: cert-manager
    version: "v0.10.0"
    repository: "https://charts.jetstack.io"

Check the repositories for the most up-to-date versions.

Step 9) Add the cert-manager helm repo.

helm repo add cert-manager https://charts.jetstack.io

Step 10) Add email address for Let’s Encrypt to values.yaml file.

letsencrypt:
  contactEmail: YOUR-EMAIL

Perform a helm upgrade to install the new dependencies without affecting the cluster.

Step 11) Set cert-manager defaults in values.yaml. Start with staging for testing.

cert-manager:
  ingressShim:
    defaultIssuerName: "staging"
    defaultIssuerKind: "ClusterIssuer"
    defaultACMEChallengeType: "http01"

Step 11.5) OPTIONAL. Set nginx-ingress defaults in values.yaml.

nginx-ingress:
  controller:
    service:
      loadBalancerIP: "EXTERNAL_IP"
    config:
      proxy-body-size: 64m

To find the EXTERNAL_IP, run kubectl get svc --namespace NAMESPACE and inspect the EXTERNAL-IP column of the nginx-ingress-controller service of type LoadBalancer.

Step 12) Perform a helm upgrade.

Step 13) Enable ingress, annotations, hosts and TLS for hub and binder in values.yaml file.

binderhub:
  ingress:
    enabled: true
    annotations:
      # Ask cert-manager to provide a TLS certificate (stored in a Secret),
      # using the default values configured in the cert-manager chart values.
      kubernetes.io/tls-acme: "true"
      # Explicitly use the nginx ingress controller instead of "gce".
      # This overrides any cloud-provided ingress controller in favour of the
      # one we chose to deploy, i.e. nginx.
      kubernetes.io/ingress.class: nginx
    hosts:
      - YOUR-BINDER-HOST-DOMAIN
    tls:
      - secretName: binder-tls
        hosts:
          - YOUR-BINDER-HOST-DOMAIN

  jupyterhub:
    ingress:
      enabled: true
      annotations:
        kubernetes.io/tls-acme: "true"
        kubernetes.io/ingress.class: nginx
      hosts:
        - YOUR-JUPYTERHUB-HOST-DOMAIN
      tls:
        - secretName: hub-tls
          hosts:
            - YOUR-JUPYTERHUB-HOST-DOMAIN

Step 14) Switch binder and hub services to use cluster IPs.

binderhub:
  service:
    type: ClusterIP
  jupyterhub:
    proxy:
      service:
        type: ClusterIP

This is where the nodePorts error crops up!

OPTIONAL. Lower the TTL of your A records before changing the DNS to reduce propagation time.

Step 15) Perform a helm upgrade to check the dummy certificates work.

Step 16) Switch to the prod clusterissuer in values.yaml file.

cert-manager:
  ingressShim:
    defaultIssuerName: "prod"

Step 17) Perform a helm upgrade to enable HTTPS on your BinderHub.