Helm Chart File Structure
local-helm-chart/
|-files/
| |-etc/
| | |-jupyter/
| | | |-templates/
| | | | |-login.html
| | | | |-page.html
| | | |
| | | |-jupyter_notebook_config.py
|
|-templates/
| |-clusterissuer.yaml
| |-user-configmap.yaml
| |-_helpers.tpl
|
|-Chart.yaml
|-requirements.yaml
|-values.yaml
Installation/Setup
Step 1) Install the Custom Resource Definitions (CRDs).
kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.10/deploy/manifests/00-crds.yaml
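To double-check that the CRDs registered successfully (they all live in the certmanager.k8s.io API group used below), you can run:
kubectl get crd | grep certmanager.k8s.io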
Step 2) Create the clusterissuer.yaml file in the templates/ folder.
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: prod
  labels:
    helm.sh/chart: {{ include "hub23-chart.chart" . }}
    app.kubernetes.io/name: {{ include "hub23-chart.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: {{ .Values.letsencrypt.contactEmail }}
    privateKeySecretRef:
      name: prod-acme-key
    http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: staging
  labels:
    helm.sh/chart: {{ include "hub23-chart.chart" . }}
    app.kubernetes.io/name: {{ include "hub23-chart.name" . }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    app.kubernetes.io/instance: {{ .Release.Name }}
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: {{ .Values.letsencrypt.contactEmail }}
    privateKeySecretRef:
      name: staging-acme-key
    http01: {}
Step 3) Create the user-configmap.yaml file in the templates/ folder.
kind: ConfigMap
apiVersion: v1
metadata:
  name: user-etc-jupyter
  labels:
    app: jupyterhub
    component: etc-jupyter
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
data:
  {{- range $name, $content := .Values.etcJupyter }}
  {{- if eq (typeOf $content) "string" }}
  {{ $name }}: |
    {{- $content | nindent 4 }}
  {{- else }}
  {{ $name }}: {{ $content | toJson | quote }}
  {{- end }}
  {{- end }}
  {{- (.Files.Glob "files/etc/jupyter/*").AsConfig | nindent 2 }}
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: user-etc-jupyter-templates
  labels:
    app: jupyterhub
    component: etc-jupyter
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
data:
  {{- (.Files.Glob "files/etc/jupyter/templates/*").AsConfig | nindent 2 }}
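The range loop in the first ConfigMap renders anything placed under etcJupyter in values.yaml, alongside the files picked up by Files.Glob. A minimal sketch of such a values entry (the key and settings here are purely illustrative, not part of this guide):
etcJupyter:
  jupyter_notebook_config.json:
    NotebookApp:
      shutdown_no_activity_timeout: 600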
Step 4) Create the _helpers.tpl file in the templates/ folder.
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "hub23-chart.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "hub23-chart.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "hub23-chart.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
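These helpers back the include calls used in the clusterissuer labels above; any other template added to the chart can reuse them the same way. An illustrative fragment only (not a resource defined in this guide):
metadata:
  name: {{ include "hub23-chart.fullname" . }}-example
  labels:
    helm.sh/chart: {{ include "hub23-chart.chart" . }}
    app.kubernetes.io/name: {{ include "hub23-chart.name" . }}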
Step 5) Create the login.html file in the files/etc/jupyter/templates/ folder.
{% extends "templates/login.html" %}

{% block site %}
<div id="ipython-main-app" class="container">
  <h1>Binder inaccessible</h1>
  <h2>
    You can get a new Binder for this repo by clicking <a href="{{binder_url}}">here</a>.
  </h2>
  <p>
    The shareable URL for this repo is: <tt>{{binder_url}}</tt>
  </p>
  <h4>Is this a Binder that you created?</h4>
  <p>
    If so, your authentication cookie for this Binder has been deleted or expired.
    You can launch a new Binder for this repo by clicking <a href="{{binder_url}}">here</a>.
  </p>
  <h4>Did someone give you this Binder link?</h4>
  <p>
    If so, the link is outdated or incorrect.
    Recheck the link for typos or ask the person who gave you the link for an updated link.
    A shareable Binder link should look like <tt>{{binder_url}}</tt>.
  </p>
</div>
{% endblock site %}
Step 6) Create the page.html file in the files/etc/jupyter/templates/ folder.
{% extends "templates/page.html" %}
{% block login_widget %}{% endblock %}
Step 7) Create the jupyter_notebook_config.py file in the files/etc/jupyter/ folder.
import os

# Serve the custom templates from /etc/jupyter/templates and expose the Binder
# launch URL to them, falling back to mybinder.org if BINDER_URL is unset.
c.NotebookApp.extra_template_paths.append('/etc/jupyter/templates')
c.NotebookApp.jinja_template_vars.update({
    'binder_url': os.environ.get('BINDER_URL', 'https://mybinder.org'),
})
Step 8) Add nginx-ingress and cert-manager as dependencies in the requirements.yaml file.
dependencies:
  # https://github.com/helm/charts/tree/master/stable/nginx-ingress
  - name: nginx-ingress
    version: "1.19.0"
    repository: "https://kubernetes-charts.storage.googleapis.com"
  # https://github.com/helm/charts/tree/master/stable/cert-manager
  - name: cert-manager
    version: "v0.10.0"
    repository: "https://charts.jetstack.io"
Check the linked chart repositories for the most up-to-date versions.
Step 9) Add the cert-manager Helm repository.
helm repo add cert-manager https://charts.jetstack.io
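Then refresh the local repository cache so Helm can resolve the cert-manager version pinned in requirements.yaml:
helm repo update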
Step 10) Add an email address for Let's Encrypt to the values.yaml file.
letsencrypt:
  contactEmail: YOUR-EMAIL
Perform a helm upgrade to install the new dependencies; this should not change anything in the running BinderHub yet.
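A minimal sketch of that upgrade, assuming the chart lives in local-helm-chart/ and using placeholder release/namespace names (add any extra --values or --set flags your deployment already uses):
helm dependency update ./local-helm-chart
helm upgrade RELEASE-NAME ./local-helm-chart --namespace NAMESPACE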
Step 11) Set cert-manager defaults in the values.yaml file. Start with staging for testing.
cert-manager:
  ingressShim:
    defaultIssuerName: "staging"
    defaultIssuerKind: "ClusterIssuer"
    defaultACMEChallengeType: "http01"
Step 11.5) OPTIONAL: Set nginx-ingress defaults in the values.yaml file.
nginx-ingress:
  controller:
    service:
      loadBalancerIP: "EXTERNAL_IP"
    config:
      proxy-body-size: 64m
To find the EXTERNAL_IP, run kubectl get svc --namespace NAMESPACE and inspect the nginx-ingress-controller service of type LoadBalancer.
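Alternatively, the IP can be extracted directly with a jsonpath query; the service name here is a placeholder, since the exact name depends on your release:
kubectl get svc NGINX-INGRESS-CONTROLLER-SERVICE --namespace NAMESPACE -o jsonpath='{.status.loadBalancer.ingress[0].ip}'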
Step 12) Perform a helm upgrade.
Step 13) Enable ingress, annotations, hosts and TLS for the hub and binder in the values.yaml file.
binderhub:
  ingress:
    enabled: true
    annotations:
      # Ask cert-manager (via ingress-shim) to provide a TLS secret for this
      # ingress, using the default issuer configured in values.yaml
      kubernetes.io/tls-acme: "true"
      # Use the nginx-ingress controller explicitly instead of any cloud-provided
      # one (e.g. "gce"); this overrides the cloud's default ingress controller
      # in favour of the nginx-ingress deployed by this chart
      kubernetes.io/ingress.class: nginx
    hosts:
      - YOUR-BINDER-HOST-DOMAIN
    tls:
      - secretName: binder-tls
        hosts:
          - YOUR-BINDER-HOST-DOMAIN
  jupyterhub:
    ingress:
      enabled: true
      annotations:
        kubernetes.io/tls-acme: "true"
        kubernetes.io/ingress.class: nginx
      hosts:
        - YOUR-JUPYTERHUB-HOST-DOMAIN
      tls:
        - secretName: hub-tls
          hosts:
            - YOUR-JUPYTERHUB-HOST-DOMAIN
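Once the next helm upgrade has been performed (Step 15), it is worth confirming that both ingresses exist and that cert-manager has created matching Certificate resources; assuming your deployment namespace is NAMESPACE:
kubectl get ingress --namespace NAMESPACE
kubectl get certificates --namespace NAMESPACE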
Step 14) Switch binder and hub services to use cluster IPs.
binderhub:
  service:
    type: ClusterIP
  jupyterhub:
    proxy:
      service:
        type: ClusterIP
This is where the nodePorts error can crop up: the binder and proxy services already exist as type LoadBalancer with nodePort values assigned, and Kubernetes rejects an update that switches them to ClusterIP while those nodePort fields are still present.
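If the upgrade is rejected for this reason, one common workaround is to delete the offending Services and re-run the upgrade so they are recreated as ClusterIP. The service names below are assumptions based on the BinderHub and JupyterHub chart defaults; list the services first and adjust accordingly:
kubectl get svc --namespace NAMESPACE
kubectl delete svc binder proxy-public --namespace NAMESPACE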
OPTIONAL: Lower the TTL of your A records before changing the DNS to reduce propagation time.
Step 15) Perform a helm upgrade and check that the dummy (staging) certificates are issued correctly.
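To check the staging certificates, inspect the Certificate resources cert-manager created; ingress-shim normally names them after the secretName values set in Step 13, but list them first if in doubt:
kubectl get certificates --namespace NAMESPACE
kubectl describe certificate binder-tls --namespace NAMESPACE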
Step 16) Switch to the prod clusterissuer in the values.yaml file.
cert-manager:
  ingressShim:
    defaultIssuerName: "prod"
Step 17) Perform a helm upgrade to enable HTTPS on your BinderHub.
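As a final check, the binder domain should now present a valid Let's Encrypt certificate:
curl -I https://YOUR-BINDER-HOST-DOMAIN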