Hello JupyterHub team and community,
I have a question about an issue that occurs when I deploy JupyterHub on my Kubernetes cluster.
The first thing was that, when I deployed JupyterHub, the hub pod stayed in Pending because of an unbound PersistentVolumeClaim. I found one working solution on this blog.
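For context, the resources I created look roughly like this (a sketch only: the names, the size, and the hostPath are placeholders from my setup, and marking the class as the default is my own addition so the hub's PVC binds without further configuration):

```yaml
# Sketch only: names, size, and hostPath are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  annotations:
    # make this the cluster default so the hub's PVC binds to it
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  hostPath:
    path: /mnt/data/jupyterhub/hub-db
```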
After creating the missing StorageClass and PersistentVolume and running a helm upgrade, the hub pod got stuck in a loop of Error and CrashLoopBackOff. The hub pod's logs showed a long traceback ending with the line:
PermissionError: [Errno 13] Permission denied: '/srv/jupyterhub/jupyterhub_cookie_secret'
For this error I found the solution in this forum, in the topic CrashLoopBackOff due to PermissionError: [Errno 13] Permission denied: ‘/srv/jupyterhub/jupyterhub_cookie_secret’.
There @manics briefly explained an alternative way, but it is not clear to me how to do it.
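My guess is that the alternative means generating the cookie secret up front and putting it into config.yaml, so the hub never has to write the file itself; roughly like this (hub.cookieSecret is the setting I found in the chart 0.11.1 docs, so please correct me if something else was meant):

```yaml
# My guess at the alternative: pass the cookie secret via config.yaml so the
# hub does not have to write /srv/jupyterhub/jupyterhub_cookie_secret itself.
# The value would be generated with: openssl rand -hex 32
hub:
  cookieSecret: "<64 hex characters from openssl rand -hex 32>"
```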
The fix works, but my question now is whether there is a connection between the first issue, where the StorageClass and PersistentVolume had to be created manually, and the second issue, where the file ‘/srv/jupyterhub/jupyterhub_cookie_secret’ inside the hub pod could not be written.
In particular, because the data in the user pods is not persisted at the moment, a more elegant setup would be welcome.
Would the right solution be to mount persistent volumes for the user data by configuring this in config.yaml?
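Based on the singleuser.storage section of the chart documentation, my guess would be something like this (capacity and storageClass are placeholders for my setup):

```yaml
# Guess: let the chart create a dynamic PVC per user.
# capacity and storageClass are placeholders.
singleuser:
  storage:
    type: dynamic
    capacity: 10Gi
    dynamic:
      storageClass: local-storage
```

Although I suppose that with a no-provisioner StorageClass like the one above, a PersistentVolume would still have to be created by hand for every user, so a real provisioner might be needed for this?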
Information about the setup:
- Kubernetes cluster
  - bare metal
  - OS: Ubuntu 20.04 LTS
  - 1 master, 1 worker
  - Kubernetes version 1.21
- Helm
  - version 3.6.3
- JupyterHub
  - chart version 0.11.1
  - app version 1.3.0
I am looking forward to your suggestions.
Best regards
tnecnivkcots