Jupyter on Kubernetes doesn't spawn Jupyter pods for users because of a permission error

I installed JupyterHub via Helm and everything worked fine at first. But when I try to spawn user servers, the system spawns a new pod, which then crashes with this error:

I have no idea why. I used the Helm deployment, which should take care of everything needed to run Jupyter. If there is a permission error in a spawning pod, it seems like a bug.

We use JupyterHub 3.0.0.
It runs on a Kubernetes cluster provided by the HPE Ezmeral Container Platform on a BlueData Data Fabric.

I originally uploaded more info to a GitHub issue, but it was closed with a note that I should post here instead. All the information is still viewable there: Jupyter on Kubernetes doesnt spawn Jupyter Pods for Users beacuase of an Permission Errro · Issue #4414 · jupyterhub/jupyterhub · GitHub

Are you using the official Helm chart (Zero to JupyterHub with Kubernetes) or have you written your own chart?

Can you give us sufficient information to reproduce your problem? Does your singleuser container work when you run it on its own, without JupyterHub?

No, I use this Helm script:

What I do is simple. I deployed the Helm chart, which was easy, and changed the proxy service from LoadBalancer to NodePort. That shouldn't be an issue; it is only needed because we have a dedicated proxy on a separate machine. All the pods then spawn without issues, including the Hub. But after I log in, the system tells me that “your server is starting up” and then the error happens.
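For reference, the service type change is just this in values.yaml (a minimal sketch, assuming the standard proxy.service layout of the chart):

```yaml
# values.yaml - expose the proxy on a NodePort so our external,
# dedicated proxy machine can forward traffic to it instead of a
# cloud LoadBalancer.
proxy:
  service:
    type: NodePort
```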

There’s still not enough information to investigate the issue, but my best guess from your permission error message is that you’re mounting a volume into the user pod and it has the wrong permissions. This is something that should be handled by your Kubernetes volume provisioner.

It may require extra configuration; e.g. on many public cloud providers you need to set singleuser.fsGid. You'll need to check the documentation for your K8s cluster, or for your volume provisioner if it's installed separately.
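With the Zero to JupyterHub chart that would look roughly like this in values.yaml (a sketch only; 100 is just a common default, pick a GID that makes sense for your nodes and volumes):

```yaml
# values.yaml - fsGid is applied as the pod's fsGroup, so Kubernetes
# adjusts group ownership of mounted volumes when the pod starts.
singleuser:
  fsGid: 100   # use a GID that actually exists and should own the volume data
```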

Failing that, the brute-force approach is to add an init container that runs as root and does a recursive chown to fix the permissions.
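Roughly something like this (an untested sketch; the volume name "home" and the /home/jovyan mount path are assumptions and must match whatever your chart and spawner actually create):

```yaml
# values.yaml - init container that runs as root and fixes ownership of the
# home volume before the single-user notebook container starts.
singleuser:
  initContainers:
    - name: fix-home-permissions
      image: busybox:1.36
      command: ["sh", "-c", "chown -R 1000:1000 /home/jovyan"]
      securityContext:
        runAsUser: 0            # must be root to chown files it does not own
      volumeMounts:
        - name: home            # assumption: match your user storage volume name
          mountPath: /home/jovyan
```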

Please tell me what info you need.

Just in case somebody has the same problem as I do, here is what fixed it for me.

Line 87 in the values.yaml:
fsGroup: 100

I also changed runAsUser and runAsGroup below it to 1000.

There was no GID 100 on my Red Hat installation.
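For anyone comparing, the block I ended up editing looks roughly like this (a sketch; the exact keys and line numbers depend on your chart, and fsGroup should be set to a group that really exists on your nodes):

```yaml
# values.yaml - security context for the spawned user pods.
securityContext:
  runAsUser: 1000    # changed from the chart's default
  runAsGroup: 1000   # changed from the chart's default
  fsGroup: 100       # the default on line 87; GID 100 did not exist on my
                     # Red Hat nodes, so use a GID that actually exists
```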