Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"

Hello! I have been following the z2jh documentation to deploy a JHub on AWS.

I got as far as launching a Kubernetes cluster (following these instructions) and proceeded to install and initialize Helm. So far, so good.

When I try to run this step:

ubuntu@ip-172-31-32-95:~/config/jhub-config$ helm upgrade --install $RELEASE jupyterhub/jupyterhub   --namespace $NAMESPACE    --version=0.8.2   --values config.yaml

I get:

Release "jhub" does not exist. Installing it now.
Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"

Does anyone know what this error means? My google-fu is failing me.

Given the presence of batch/v1 and “Job” in the message, I am guessing it’s related to this template. Could it be that the API version needs to be updated? In other remotely similar cases that seems to help.
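For context, this is roughly what the kind/apiVersion pair from the error looks like at the top of a Job manifest (a sketch; the metadata name is illustrative, not copied from the chart):

```yaml
# Sketch of a batch/v1 Job header, the kind named in the error.
# The metadata below is illustrative only.
apiVersion: batch/v1
kind: Job
metadata:
  name: image-puller
```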

Some more info: the kubectl versions on client and server are not within one minor version of each other:

kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

This may be a red herring too. I am still trying to figure out how to harmonize that, so if anyone knows, feel free to chime in. Otherwise, I will tell you when I figure it out.

I think the version difference shouldn’t matter. To update your local/client version, you need to install a newer version of kubectl (in the same way you installed the original, but with a new version).

Maybe you can disable the pre-puller in the config and deploy again to see what happens. With the pre-puller disabled, this Job won’t be deployed, so we should see the error go away.
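In case it helps, this is the kind of values snippet I mean (a sketch assuming the 0.8.x chart; the prePuller key names are worth double-checking against the chart’s values.yaml):

```yaml
# config.yaml — sketch; key names assume the z2jh 0.8.x chart
prePuller:
  hook:
    enabled: false       # skip the pre-install hook Job (the batch/v1 Job)
  continuous:
    enabled: false       # skip the continuous image puller as well
```

Then re-run the same `helm upgrade --install ...` command with this config.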

Thanks @betatim! How do I disable the pre-puller in the config? To be clear, at the point where I issue this helm upgrade, my config file includes only the proxy/secretToken field.

OK: I think I found it, here

I can confirm that following these instructions indeed made my error go away. I’ll report back here if I run into any issues downstream of this.