I have no idea why. I used the Helm deployment, which should take care of everything needed to run Jupyter. If there is a permission error in a spawning pod, that seems like a bug.
We use JupyterHub 3.0.0. It runs on a Kubernetes cluster provided by the HPE Ezmeral Container Platform on a BlueData Data Fabric.
What I did is simple. I deployed the Helm chart, which was easy, and changed the proxy service from LoadBalancer to NodePort. That shouldn't be an issue; it's only needed because we have a dedicated proxy on a separate machine. After that, all the pods spawn without issues, including the hub. But after I log in, the system tells me "your server is starting up" and then the error happens.
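For reference, the change I made in my `values.yaml` looks roughly like this (assuming the standard zero-to-jupyterhub chart layout):

```yaml
# values.yaml for the zero-to-jupyterhub Helm chart
proxy:
  service:
    type: NodePort  # default is LoadBalancer
```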
There’s still not enough information to investigate the issue, but my best guess from your permissions error message is that you’re mounting a volume into the user pod, but it has the wrong permissions. This is something that should be handled by your Kubernetes volume provisioner.
It may require extra configuration. For example, on many public cloud providers you need to set `singleuser.fsGid`, but you’ll need to check the documentation for your K8s cluster, or for your volume provisioner if it’s installed separately.
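If you’re on the standard zero-to-jupyterhub chart, that setting goes in `values.yaml` something like this (100 is the `users` group in the default single-user images; check the GID your image actually uses):

```yaml
singleuser:
  fsGid: 100  # applied as fsGroup on the user pod, so mounted volumes become group-writable
```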
Failing that, the brute-force approach is to add an init container that runs as root and does a recursive chown to fix the permissions.
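As a sketch, assuming the zero-to-jupyterhub chart’s `singleuser.initContainers` option and the default `/home/jovyan` home directory (the volume name and mount path depend on your storage configuration, so check a spawned user pod with `kubectl get pod -o yaml` first):

```yaml
singleuser:
  initContainers:
    - name: fix-home-permissions
      image: busybox:1.36
      securityContext:
        runAsUser: 0  # root, so chown is permitted
      # 1000:100 is jovyan:users in the default single-user images
      command: ["sh", "-c", "chown -R 1000:100 /home/jovyan"]
      volumeMounts:
        - name: home  # hypothetical; must match the volume name in your user pod spec
          mountPath: /home/jovyan
```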