This path, and the templates in it, must exist inside the JupyterHub container. There are two ways (that I am aware of) to do this. First, you can extend the JupyterHub image by copying the templates into it and use that image in your config, but this means you have to build a new image every time you upgrade JupyterHub. The second way, which I prefer, is to use initContainers: you clone/download your custom templates into a volume and then mount that volume into the hub container. Here is an example config for that:
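(A sketch of such a config; the repo URL is a placeholder, and the alpine/git image and the runAsUser value are assumptions to adapt to your setup.)

```yaml
hub:
  initContainers:
    - name: git-clone-templates
      image: alpine/git                      # any image with git will do
      args:
        - clone
        - --single-branch
        - --depth=1
        - --
        - https://github.com/your/repo.git   # replace with your templates repo
        - /etc/jupyterhub/custom
      securityContext:
        runAsUser: 1000                      # a UID that can write to the volume
      volumeMounts:
        - name: custom-templates
          mountPath: /etc/jupyterhub/custom
  extraVolumes:
    - name: custom-templates
      emptyDir: {}
  extraVolumeMounts:
    - name: custom-templates
      mountPath: /etc/jupyterhub/custom
  extraConfig:
    templates: |
      c.JupyterHub.template_paths = ['/etc/jupyterhub/custom/jupyterhub/templates']
```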
This config clones the repo (https://github.com/your/repo.git, which you have to replace with the URL of the repo holding your custom templates) into the /etc/jupyterhub/custom folder, where the custom-templates volume is mounted. The same volume is mounted into the hub container too (see the extraVolumeMounts config). Finally, we have to tell JupyterHub where to find the custom templates by setting c.JupyterHub.template_paths, as mentioned before. For example, if your templates live in the jupyterhub/templates folder of your repo, set it to ['/etc/jupyterhub/custom/jupyterhub/templates'] as I did in the example config.
Another, perhaps more complicated, option is to mount ConfigMaps containing the template files. The benefit is that you don't need an init container to do the work: the files are mounted directly, assuming they are available at helm upgrade time, at the cost of some additional configuration work.
To use this approach, do something like the following. Note that this is a WIKI post, so if you find something to correct or add, please feel free to edit it!
- Use a custom Helm chart that declares the jupyterhub Helm chart as a dependency (a sketch of the pieces follows this list).
- Add some template files in a local folder alongside the Helm chart's configuration files.
- spawn.html: in the chart whose requirements.yaml declares jupyterhub as a dependency, this file could for example be added under files/etc/jupyterhub/templates/. Note that it references an image at /hub/static/external/my-custom-image.svg; that image also needs to be mounted to be served, which is done if you place it within files/static/external/, assuming the use of the ConfigMaps presented in this example.
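A minimal sketch of those pieces, assuming a Helm 2-style requirements.yaml and illustrative names (hub-templates):

```yaml
# requirements.yaml of your custom chart
dependencies:
  - name: jupyterhub
    version: "0.8.2"   # or whichever chart version you target
    repository: https://jupyterhub.github.io/helm-chart/
```

```yaml
# templates/configmap-hub-templates.yaml
# Bundles everything under files/etc/jupyterhub/templates/ into a
# ConfigMap that the hub pod can mount.
kind: ConfigMap
apiVersion: v1
metadata:
  name: hub-templates
data:
{{ (.Files.Glob "files/etc/jupyterhub/templates/*").AsConfig | indent 2 }}
```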
Thank you @consideRatio and @bitnik for your detailed examples and explanations! I will try them out.
Could I ask how you knew what values to put under extraVolumes or extraVolumeMounts? I looked over values.yaml but didn't know what parameters to put (e.g. name and configMap under extraVolumes).
I relied on previous knowledge about Kubernetes and Helm, and on inspiration from mybinder.org's configuration. It took me quite a while to get this right.
The question I asked myself was: what content would the webserver JupyterHub runs try to serve if we wrote that? Through some testing and source code inspection, I figured out that an HTML reference to something under /hub maps to content on disk at the location JupyterHub's data_files_path is configured to point to. I did not reconfigure this location but used its default value, /usr/local/share/jupyterhub/, and placed a folder of files within it using a ConfigMap.
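A sketch of what the wiring could look like; the ConfigMap names (hub-templates, hub-templates-external) are illustrative, and the values sit under a jupyterhub: key because the chart is used as a dependency:

```yaml
jupyterhub:
  hub:
    extraVolumes:
      - name: hub-templates
        configMap:
          name: hub-templates
      - name: hub-templates-external
        configMap:
          name: hub-templates-external   # built from files/static/external/
    extraVolumeMounts:
      - name: hub-templates
        mountPath: /etc/jupyterhub/templates
      - name: hub-templates-external
        # under the default data_files_path, so files here are
        # served at /hub/static/external/
        mountPath: /usr/local/share/jupyterhub/static/external
    extraConfig:
      templates: |
        c.JupyterHub.template_paths = ['/etc/jupyterhub/templates']
```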
Making extraConfig a dictionary of key/value pairs allows extraConfig information from one file to be merged with another. This is useful if you have two config.yaml files, for example secret-config.yaml and non-secret-config.yaml, and both want to add some extraConfig. If both files assigned a single string value to extraConfig itself (written with the | symbol in the YAML syntax), one would override the other instead of merging in a meaningful way. Introducing a key/value pair in between avoids this. That is the only purpose the key serves, so you can name it whatever you like. The configuration snippets listed under extraConfig are executed in alphabetical order of their key names.
extraConfig:
  config1: |
    print("will execute first")
    print("PS: this is Python")
  config2: |
    print("will execute second")
I tried using an init container as you said. I can tell that /etc/jupyterhub/custom is created, but nothing is inside it. Maybe it cloned the files incorrectly?
I also tried @consideRatio’s solution with slightly different folder paths and reached a similar problem (although I may have botched the helm install of the chart, will try again soon). So it’s most likely a problem on my end?
There are some extra spaces in hub.initContainers.args (after the clone command). This should make the git-clone-templates container fail, so the hub shouldn't start at all. Also, with that config you should set c.JupyterHub.template_paths to ['/etc/jupyterhub/custom']. Here is an updated version of your config:
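(A sketch of the relevant part; the key fix is that no args item may carry trailing whitespace.)

```yaml
hub:
  initContainers:
    - name: git-clone-templates
      image: alpine/git
      args:                                  # no trailing spaces after any item
        - clone
        - --single-branch
        - --depth=1
        - --
        - https://github.com/your/repo.git
        - /etc/jupyterhub/custom
  extraConfig:
    templates: |
      # templates are at the repo root in this case
      c.JupyterHub.template_paths = ['/etc/jupyterhub/custom']
```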
My bad, I initially wrote c.JupyterHub.template_paths = ['/etc/jupyterhub/custom']; I changed it to c.JupyterHub.template_paths = ['/etc/jupyterhub/custom/jupyterhub/templates'] while trying to debug. The extra spaces might have come from me copying from a backup file that wasn’t .yaml.
I tried using your config; there still seems to be nothing in /etc/jupyterhub/custom in the hub pod.
The custom folder does seem to be mounted, although I'm not sure whether that's the right location.
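One way to check the mount and the clone step, assuming the jhub namespace used elsewhere in this thread (the pod name is a placeholder):

```sh
kubectl --namespace=jhub get pod                                  # find the hub pod name
kubectl --namespace=jhub exec <hub-pod> -- ls -la /etc/jupyterhub/custom
kubectl --namespace=jhub logs <hub-pod> -c git-clone-templates    # init container logs
```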
But you also have to go through what changed from 0.8.2 to 0.9-b63f5c9 (e.g. JupyterHub was upgraded from 0.9.4 to 1.0.0) and update your configuration accordingly.
Hey, I am following the discussion you guys had above but still do not get it working, and the folder /etc/jupyterhub/custom does not appear… Would you mind telling me what command you use to apply the modified config.yaml? Is it something like this:
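```sh
# using the release name, namespace, and chart version mentioned in this thread
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --version 0.9-2d435d6 --values config.yaml
```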
Thank you for your reply! I was using the same config file as yours, shown below; to simplify, I only defined two keys, proxy and hub. Then I used the command helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version 0.9-2d435d6 --values config.yaml to apply this config file.
Release "jhub" has been upgraded.
LAST DEPLOYED: Tue Oct 1 21:07:04 2019
NAMESPACE: jhub
STATUS: DEPLOYED
RESOURCES:
==> v1/ClusterRole
NAME AGE
jhub-user-scheduler-complementary 12h
==> v1/ClusterRoleBinding
NAME AGE
jhub-user-scheduler-base 12h
jhub-user-scheduler-complementary 12h
==> v1/ConfigMap
NAME DATA AGE
hub-config 1 12h
user-scheduler 1 12h
==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
continuous-image-puller 3 3 3 3 3 <none> 12h
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
hub 1/1 1 1 12h
proxy 1/1 1 1 12h
user-scheduler 2/2 2 2 12h
==> v1/PersistentVolumeClaim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
hub-db-dir Bound pvc-5f091726-e3e5-11e9-beb2-42010af00204 1Gi RWO standard 12h
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
continuous-image-puller-f567h 1/1 Running 0 12h
continuous-image-puller-jrlq8 1/1 Running 0 12h
continuous-image-puller-tjn56 1/1 Running 0 12h
hub-d56695869-h5m45 1/1 Running 0 12h
proxy-f54886f9d-rskwm 1/1 Running 0 12h
user-scheduler-b7db6b677-lv7kg 1/1 Running 0 12h
user-scheduler-b7db6b677-p9drn 1/1 Running 0 12h
==> v1/Role
NAME AGE
hub 12h
==> v1/RoleBinding
NAME AGE
hub 12h
==> v1/Secret
NAME TYPE DATA AGE
hub-secret Opaque 2 12h
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hub ClusterIP 10.0.10.209 <none> 8081/TCP 12h
proxy-api ClusterIP 10.0.10.213 <none> 8001/TCP 12h
proxy-public LoadBalancer 10.0.2.67 36.223.155.214 80:32237/TCP,443:32317/TCP 12h
==> v1/ServiceAccount
NAME SECRETS AGE
hub 1 12h
user-scheduler 1 12h
==> v1/StatefulSet
NAME READY AGE
user-placeholder 0/0 12h
==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
hub 1 N/A 0 12h
proxy 1 N/A 0 12h
user-placeholder 0 N/A 0 12h
user-scheduler 1 N/A 1 12h
NOTES:
Thank you for installing JupyterHub!
Your release is named jhub and installed into the namespace jhub.
You can find if the hub and proxy is ready by doing:
kubectl --namespace=jhub get pod
and watching for both those pods to be in status 'Running'.
You can find the public IP of the JupyterHub by doing:
kubectl --namespace=jhub get svc proxy-public
It might take a few minutes for it to appear!
Note that this is still an alpha release! If you have questions, feel free to
1. Read the guide at https://z2jh.jupyter.org
2. Chat with us at https://gitter.im/jupyterhub/jupyterhub
3. File issues at https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues
Unfortunately, there is still nothing in /home/test-templates/ and nothing has changed on the login page. BTW, I am using Kubernetes on Google Cloud. Any suggestions?
Hi everyone,
I am new to configuring JupyterHub. I am looking to customize the JupyterHub login page on a Kubernetes cluster, and I have followed the method described by @bitnik. When I check the hub pod I see my templates directory with all the files, but when I check jupyterhub_config.py I see that template_paths is not set, which may explain why I am not seeing any changes to my login page. This is a snippet from the config.py on the hub pod:
for trait, cfg_key in (
    # Max number of servers that can be spawning at any one time
    ('concurrent_spawn_limit', None),
    # Max number of servers to be running at one time
    ('active_server_limit', None),
    # base url prefix
    ('base_url', None),
    ('allow_named_servers', None),
    ('named_server_limit_per_user', None),
    ('authenticate_prometheus', None),
    ('redirect_to_server', None),
    ('shutdown_on_logout', None),
    ('template_paths', None),  # <-- the setting in question
    ('template_vars', None),
):
@bitnik thank you for taking a look. If I understand correctly, you are asking whether my templates are at the root of my repo? The answer is no: they are one level down from the root, i.e. /jupyterhub/templates/, and my files, including the edited login.html, are in the templates subdirectory.