PermissionError [Errno 13] after manual creation of PersistentVolume and StorageClass

Hello jupyterhub team and community,

I have a question about an issue that occurs when I deploy JupyterHub on my Kubernetes cluster.

The first thing was that, when I deployed JupyterHub, the hub pod stayed 'Pending' because of an unbound PersistentVolumeClaim. I found one working solution on this blog.
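For reference, the manifests I applied looked roughly like this sketch (the names, capacity, node name, and local path are placeholders for my actual values):

```yaml
# Sketch of the manually created StorageClass and PersistentVolume;
# all names, sizes, and paths below are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/hub-db          # directory must already exist on the node
  nodeAffinity:                # local volumes must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1     # placeholder node name
```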
After creating the missing StorageClass and PersistentVolume and running a helm upgrade, the hub pod got stuck in a loop of Error and CrashLoopBackOff. The hub pod's logs showed a long traceback whose last line was:

PermissionError: [Errno 13] Permission denied: '/srv/jupyterhub/jupyterhub_cookie_secret'

For this error I found the solution in this forum, in CrashLoopBackOff due to PermissionError: [Errno 13] Permission denied: '/srv/jupyterhub/jupyterhub_cookie_secret'.
There @manics briefly explained an alternative approach, but it is not clear to me how to do this.

It works, but my question now is whether there is a correlation between the first issue, where the StorageClass and PersistentVolume had to be created manually, and the second issue, where the file '/srv/jupyterhub/jupyterhub_cookie_secret' inside the hub pod could not be written.
In particular, because the user pods' data is currently not persisted, a more elegant setup would be preferable.
A solution for this could be to mount persistent volumes for user data by configuring this in config.yaml, right?
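Something like the following is what I have in mind (a sketch based on the chart's singleuser.storage options; the storage class name and capacity are assumptions):

```yaml
# Hypothetical config.yaml snippet: provision a PVC per user
# dynamically. The storageClass name and capacity are assumptions;
# they must match a StorageClass that exists in the cluster.
singleuser:
  storage:
    type: dynamic
    capacity: 2Gi
    dynamic:
      storageClass: local-storage
```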

Information about the setup:

  • Kubernetes cluster
    • bare metal
    • OS: Ubuntu 20.04 LTS
    • 1 master, 1 worker
    • Kubernetes version 1.21
  • Helm
    • version 3.6.3
  • JupyterHub
    • chart version 0.11.1
    • app version 1.3.0

I am looking forward to your suggestions.

Best regards
tnecnivkcots

This probably means the persistent volume isn't writeable by the user inside the hub pod. You might be able to get around this by setting fsGid to match the default group ID on the provisioned volume:
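For example, something like this in config.yaml (a sketch; the GID 1000 is an assumption, check which group actually owns the provisioned volume):

```yaml
# Hypothetical config.yaml snippet: fsGid sets the hub pod's fsGroup,
# which lets Kubernetes adjust the group ownership of the mounted
# volume (support for this depends on the volume type).
hub:
  fsGid: 1000   # assumed GID; use the volume's actual group ID
```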

If that doesn't work, you'll need to check the docs for whichever storage provider(s) you've installed, since you're running your own Kubernetes cluster. For instance, it might be possible to change the default permissions.

Since there is no group with a matching GID, I will have to read more about storage providers. Thank you for the hint.

The problem was the storage provisioner.
Now I'm running this local storage provisioner and it works fine. Thank you for the hint; without it I would still be wandering around in the dark.
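In case it helps others: with a dynamic provisioner in place, the manually created PersistentVolume is no longer needed, because the chart's PVCs can reference the provisioner's StorageClass directly. A sketch (the class name local-path is an assumption; substitute whatever your provisioner registers):

```yaml
# Hypothetical config.yaml snippet: let the dynamic provisioner
# create the hub's volume; the storageClassName is an assumed example.
hub:
  db:
    pvc:
      storageClassName: local-path
```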
