Bug description
Previously, we were running JupyterHub on EKS 1.26. We have now upgraded the EKS version to 1.27. All the components (pods, services) are up and running and I can see the sign-in page, but once I log in, the user pod and its PVC stay in Pending state and the Jupyter server fails to launch.
Below is the error from the scheduler pod:
W0111 05:33:10.904864 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
E0111 05:33:10.904941 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
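In case it helps triage: Kubernetes 1.27 stopped serving the storage.k8s.io/v1beta1 version of CSIStorageCapacity, which can be confirmed against the upgraded cluster (a rough sketch, assuming kubectl access; these are plain kubectl commands, nothing chart-specific):

```bash
# Sketch: confirm the v1beta1 CSIStorageCapacity API is no longer served on EKS 1.27.
# Assumes kubectl is pointed at the upgraded cluster.

# List the resources/versions served for the storage.k8s.io group (only v1 is expected on 1.27)
kubectl api-resources --api-group=storage.k8s.io

# Querying the beta version directly should return a "not found" error on 1.27
kubectl get --raw /apis/storage.k8s.io/v1beta1
```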
So we upgraded the kube-scheduler image to version 1.26. Now the scheduler pod itself errors out with:
1 run.go:74] "command failed" err="couldn't create resource lock: endpoints lock is removed, migrate to endpointsleases"
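For reference, the image bump was done roughly like this (a sketch, not the exact command we ran; the `scheduling.userScheduler.image` values path is assumed from the zero-to-jupyterhub chart, and the release/namespace names are placeholders):

```bash
# Sketch: bump the user-scheduler's kube-scheduler image on the existing release.
# Values path assumed from the zero-to-jupyterhub chart; adjust names to your install.
helm upgrade jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --version 1.1.3 \
  --reuse-values \
  --set scheduling.userScheduler.image.name=registry.k8s.io/kube-scheduler \
  --set scheduling.userScheduler.image.tag=v1.26.0
```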
How to reproduce
Deploy JupyterHub Helm chart 1.1.3 (app version 2.3.1) on EKS 1.26, then upgrade EKS to 1.27. A rough reproduction sketch is below (the Helm repo URL, release/namespace names, and the use of eksctl are assumptions, not the exact steps we ran):
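```bash
# Sketch of the reproduction steps; repo URL and names are assumptions.

# 1. Install JupyterHub chart 1.1.3 on an EKS 1.26 cluster
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --create-namespace \
  --version 1.1.3

# 2. Upgrade the EKS control plane to 1.27 (e.g. with eksctl)
eksctl upgrade cluster --name <cluster-name> --version 1.27 --approve

# 3. Log in to JupyterHub and try to start a user server
```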
Expected behaviour
All pods should be up and running.
Actual behaviour
The scheduler pod doesn't come up.