I’m seeing an issue when trying to respawn a JupyterLab pod after moving to a different node pool that lives in a different zone. Specifically, the error relates to moving the PVC across zones. The exact error the spawn fails with (on Google Cloud) is: 2023-03-02T18:34:23.027110Z [Warning] 0/144 nodes are available: 134 Insufficient cpu, 141 Insufficient memory, 141 Insufficient nvidia.com/gpu, 3 node(s) had volume node affinity conflict.
I understand this is because PVCs can’t be moved across zones. Are there any suggestions for moving the PVCs around? Any suggestions would be really helpful.
I’m not aware of an easy solution for this problem, other than running the k8s cluster in a single zone, or using annotations so that pods requiring block storage are restricted to a single zone. You’re not really losing any resilience by using a single zone, since, as you’ve found, once the PVC is created you’re limited to that zone anyway.
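For the "restrict pods to a single zone" option, a minimal sketch with kubespawner might look like the following. Assumptions: your nodes carry the standard `topology.kubernetes.io/zone` label (older clusters use `failure-domain.beta.kubernetes.io/zone`), and the zone name below is just a placeholder:

```python
# jupyterhub_config.py -- pin all user pods (and hence their PVCs) to one zone.
# "us-central1-a" is an example value; substitute a zone from your cluster.
c.KubeSpawner.node_selector = {
    "topology.kubernetes.io/zone": "us-central1-a",
}
```

With that in place the scheduler only considers nodes in that zone, so the pod and its PVC can never end up on opposite sides of a zone boundary.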
The alternative is to use a volume provisioner that works across zones. The K8s docs mention replication-type: regional-pd for Google persistent disks, but I don’t know if that helps.
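A regional PD StorageClass might look something like this sketch (parameter names per the GCE PD CSI driver docs; verify against your driver version, and note that a regional PD replicates across exactly two zones, not all of them):

```yaml
# Sketch of a regional persistent disk StorageClass on GKE.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-standard
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-balanced
  replication-type: regional-pd
# Delay binding until a pod is scheduled, so the disk's zones
# are chosen to match where the pod can actually run.
volumeBindingMode: WaitForFirstConsumer
```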
Otherwise you could look at other provisioners such as NFS, or one of the many provisioners that use object storage as a backend but present it as file storage.
Thanks @manics for the quick response. Really appreciate it!
We have to move across zones because some zones don’t support one of our node pool’s requirements. From my understanding, this will also cause problems when deploying on other cloud providers (AWS, Azure), as there might be a cap on the cluster’s resources and a respawned pod might get allocated to a different zone.
Is there a nice way to delete the PVC (using kubespawner or some built-in API) so that when spawning in a new zone, it would create one on its own?
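Not an answer from the thread, but one possible approach, assuming a recent kubespawner release that has the `delete_pvc` trait (it removes a user's PVC when the user is deleted from JupyterHub, so the next spawn creates a fresh claim wherever the pod lands):

```python
# jupyterhub_config.py -- assumption: kubespawner's delete_pvc trait is
# available in your kubespawner version. When a user is deleted from the
# hub, their PVC is deleted too; note any data on it is lost.
c.KubeSpawner.delete_pvc = True
```

For a one-off fix you can also delete the claim by hand, e.g. `kubectl delete pvc claim-<username> -n <namespace>` (the `claim-{username}{servername}` pattern is kubespawner's default `pvc_name_template`; check yours if you've customised it). Either way, deleting the PVC destroys the data on it, so copy anything you need off the volume first.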