Not sure if this will help, but I think that error comes from an old version of either the helm chart or k8s:
I’m setting up a local environment for JupyterHub testing using Kubernetes with Docker. Following the ZTJH instructions to set up the hub, I can’t spawn user pods, as they simply fail to start.
Describing the pods reveals that each one is considered “unhealthy”. The output is attached below.
```
Name: continuous-image-puller-4sxdg
Namespace: ztjh
Priority: 0
Service Account: default
Node: docker-desktop/192.168.65.4
Start Time: Wed, 11 Jan 2023 11:53:39…
```
Quoted from a related GitHub issue (opened 28 Sep 2023, closed 15 Feb 2024; labels: question, resolution/answer-provided):
**Describe scenario**
Use Helm in an AKS cluster to deploy a ClickHouse cluster.
… **Question**
#### My environment:
AKS Kubernetes Version : 1.25.5/1.25.11
AKS pricing tier : Free
Authentication and Authorization : Local accounts with Kubernetes RBAC
Helm Version : 3.9.0
#### My deployment steps are:
1. Add Helm repository for RadonDB
```
$ helm repo add ck https://radondb.github.io/radondb-clickhouse-kubernetes/
$ helm repo update
```
2. Deploy the RadonDB ClickHouse Operator
```
$ helm install clickhouse-operator ck/clickhouse-operator
```
3. Deploy the RadonDB ClickHouse cluster
```
$ helm install clickhouse ck/clickhouse-cluster
```
#### Error Logs
```
E0928 13:34:32.356720 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:34:37.208117 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:34:48.819160 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:35:04.364824 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:35:39.713061 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:36:19.258708 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:37:03.330866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:37:41.826650 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:38:20.623082 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:39:10.998654 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:39:47.558456 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:40:23.656921 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:41:02.453900 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:41:41.370270 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:42:37.333736 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:43:28.241650 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:44:02.994747 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:44:51.217722 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:45:23.607666 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:46:11.457417 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:46:43.237192 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:47:22.287988 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:47:57.490796 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:48:35.850087 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:49:21.771561 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:49:59.381494 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:50:53.042610 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
W0928 13:51:26.001364 1 reflector.go:436] pkg/client/informers/externalversions/factory.go:117: watch of *v1.ClickHouseOperatorConfiguration ended with: an error on the server ("unable to decode an event from the watch stream: unable to decode watch event: no kind \"ClickHouseOperatorConfiguration\" is registered for version \"clickhouse.radondb.com/v1\" in scheme \"pkg/client/clientset/versioned/scheme/register.go:30\"") has prevented the request from succeeding
E0928 13:51:31.959489 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:52:31.270767 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:53:03.503742 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:53:40.176138 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:54:30.611796 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E0928 13:55:09.960395 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
```
#### Others
Deploying in other Kubernetes clusters that are not AKS works fine.
Not sure about your infra, but maybe you can try to update it and see what happens?
Thanks @IvanYingX
I don’t see a direct way to downgrade the K8S version on DigitalOcean (it’s currently at 1.30.4).
So if the `failed to list *v1beta1.PodDisruptionBudget` error is indeed arising because I’m running a K8S version > 1.24 (per Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource · Issue #3926 · Azure/AKS · GitHub), I’m wondering what the solution is.
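For what it’s worth, a quick way to confirm that the cluster no longer serves the old API (assuming kubectl is pointed at the DigitalOcean cluster) is to list the versions served for the `policy` API group:
```
# On Kubernetes 1.25+ this should show only policy/v1;
# policy/v1beta1 (the PodDisruptionBudget version the old scheduler watches) was removed.
kubectl api-versions | grep policy
```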
I tried to use the latest dev version of the jupyterhub helm chart in the following way:
```
helm upgrade --cleanup-on-fail \
  --install helm-jh-test jupyterhub/jupyterhub --version 4.0.0-0.dev.git.6717.h61ab116 \
  --namespace my-jh-test \
  --create-namespace \
  --version=1 \
  --values config.yaml
```
But I can still see the error `Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource` in the log of the user-scheduler.
Wondering if there’s a way to inspect the helm chart to understand exactly which version of K8S should be used.
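If it helps, a chart’s Kubernetes compatibility (when the chart declares one) lives in its Chart.yaml as `kubeVersion`, and you should be able to print it without installing anything (assuming the `jupyterhub` repo is already added locally):
```
# Print the chart metadata (Chart.yaml), including any kubeVersion constraint
helm show chart jupyterhub/jupyterhub --version 4.0.0-0.dev.git.6717.h61ab116
```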
I was able to get notebooks to finish spawning with the following combination of helm command and config.yaml. I’m not sure if disabling the userScheduler is going to be an issue down the road…
```
helm upgrade --cleanup-on-fail \
  --install helm-jh-test jupyterhub/jupyterhub --version 4.0.0-0.dev.git.6717.h61ab116 \
  --namespace my-jh-test \
  --create-namespace \
  --version=1 \
  --values config.yaml
```
config.yaml:
```
debug:
  enabled: true
scheduling:
  userScheduler:
    enabled: false
hub:
  config:
    JupyterHub:
      authenticator_class: dummy
      log_level: DEBUG
    Authenticator:
      admin_users:
        - daniel
      allowed_users:
        - student1
    DummyAuthenticator:
      password: (redacted)
  networkPolicy:
    egress:
      - ports:
          - port: 6443
          - port: 443
singleuser:
  # Setting a global start_timeout instead of 'per profile'
  startTimeout: 3600
```
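As a generic sanity check (not from the original post), listing the pods in the release namespace after installing with this config should show the hub and proxy running and, because the user scheduler is disabled, no user-scheduler pods:
```
# With scheduling.userScheduler.enabled=false there should be no user-scheduler-* pods
kubectl get pods -n my-jh-test
```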
manics (September 4, 2024, 7:22am):
If the pod hasn’t started there won’t be any logs, since they’re generated by the application running in the pod.
`kubectl describe pod <pod name>` will often contain clues as to why the pod can’t be started. Can you share the output here?
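Not part of the reply above, but in the same spirit: the namespace events often surface scheduling or image-pull problems for pods that never start (namespace name taken from the describe output earlier in the thread):
```
# Recent events for the namespace, oldest first
kubectl get events -n ztjh --sort-by='.metadata.creationTimestamp'
```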
@manics Thanks for your response. Per the note above, I did see the following error on the “hub” pod log:
`Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource` in the log of the user-scheduler.
…which @IvanYingX helped me understand is probably a versioning issue, since in later Kubernetes versions the v1beta1 PodDisruptionBudget API is no longer available?
Therefore I used the latest dev version of the helm chart, 4.0.0-0.dev.git.6717.h61ab116, and the config file with the userScheduler disabled (see above), and I can finally get it to work on DigitalOcean!
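If anyone wants to double-check which PodDisruptionBudget objects the chart ended up creating (if any), listing them goes through the currently served `policy/v1` API, so it also confirms the old v1beta1 path is no longer involved (namespace as in the command above):
```
# List PodDisruptionBudgets created by the release, served via policy/v1
kubectl get poddisruptionbudgets -n my-jh-test
```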
Not sure if this is helpful to anyone, but I noticed that in the ZTJH docs for getting started on DigitalOcean, the command to spin up the K8S cluster uses the default-sized nodes. But when I tried that, the notebook would hang during creation.
However, when I launched the cluster with larger nodes, that error went away and I was able to launch a notebook.
Command:
```
doctl k8s cluster create jupyter-kubernetes --region syd1 --node-pool="name=worker-pool;size=s-2vcpu-4gb;count=3"
```
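On the node-sizing point, a generic check (not from the original post) is to compare each node’s allocatable CPU and memory against the singleuser pod’s resource requests, which shows whether the default node size was simply too small:
```
# Allocatable resources per node; compare against the singleuser resource requests
kubectl describe nodes | grep -A 6 "Allocatable"
```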
Log when creating a Notebook: