First of all, everything is working; I am just puzzled about how it works.
According to https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/f11f8ea8ea857917f59b7f9a1b79a9570e21e622/jupyterhub/values.yaml#L653-L668, I can specify my own cull values like this:
cull:
  timeout: 600
  every: 60
  users: true
  adminUsers: true
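For reference, I applied these values in the usual zero-to-jupyterhub way, roughly like this (the release and namespace names are just my local choices, and my exact flags may have differed slightly):

# add the JupyterHub chart repo and install with my values file
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --create-namespace \
  --values config.yaml   # config.yaml contains the cull block above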
After installing the JupyterHub Helm chart, I attached to the hub pod (using kubectl debug) and could see this process listing:
hub-77485c4b79-5d6ql:~# ps aux
PID USER TIME COMMAND
1 1000 0:00 tini -- jupyterhub --config /usr/local/etc/jupyterhub/jupyterhub_config.py --upgrade-db
7 1000 0:40 {jupyterhub} /usr/local/bin/python /usr/local/bin/jupyterhub --config /usr/local/etc/jupyterhub/jupyterhub_config.py --upgrade-db
11 1000 0:02 python3 -m jupyterhub_idle_culler --url=http://localhost:8081/hub/api --timeout=600 --cull-every=60 --concurrency=10 --cull-users
427 root 0:00 bash
436 root 0:00 ps aux
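(The debug session above was opened with something like the following; I do not remember the exact image I used. The --target=hub flag shares the hub container's process namespace, which is why ps can see PID 1:)

kubectl debug -it hub-77485c4b79-5d6ql --namespace jhub --target=hub --image=busybox -- sh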
Process 11 is exactly what I expect to see with my customized parameters.
However, I am very confused about how these parameters get into my pod. I understand that there is a ConfigMap called hub that gets mounted as a volume at /usr/local/etc/jupyterhub/config/, but when I inspect the data of the ConfigMap or the mounted folder, I cannot find the cull parameters anywhere. Furthermore, I read through the Helm chart templates themselves and found no use of those cull parameters, and I confirmed that no such reference exists in the generated manifests either.
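Concretely, here is roughly how I checked (the greps are just a quick filter; I also read through the data by eye):

# look for the cull values in the hub ConfigMap
kubectl get configmap hub --namespace jhub -o yaml | grep -i cull

# look inside the folder mounted from that ConfigMap on the hub pod
kubectl exec deploy/hub --namespace jhub -- ls -la /usr/local/etc/jupyterhub/config/

# render the chart locally and search the generated manifests
helm template jhub jupyterhub/jupyterhub --values config.yaml | grep -i cull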
So how did my parameters get into the hub workload, seemingly magically?
I would really appreciate some insights. This is very cool.