Any chance to adjust jupyterhub via helm chart values so that it does not try to create any clusterroles?

Hi, Team,
I’m trying to install the JupyterHub Helm chart (from JupyterHub’s Helm chart repository) following the approach of GitHub - fluxcd/flux2-multi-tenancy: Manage multi-tenant clusters with Flux, installing it as a tenant, but got this error. Any tips?

{"level":"error","ts":"2021-10-05T16:18:08.649Z","logger":"controller.helmrelease","msg":"Reconciler error","reconciler group":"","reconciler kind":"HelmRelease","name":"jupyterhub","namespace":"ns-ml","error":"Helm install failed: rendered manifests contain a resource that already exists. Unable to continue with install: could not get information about the resource: \"jupyterhub-user-scheduler\" is forbidden: User \"system:serviceaccount:ns-ml:ml\" cannot get resource \"clusterroles\" in API group \"\" at the cluster scope"}

I understand this situation.

“This may need to be deployed through an infrastructure PR to the system repository, wherever clusterroles are created from, if you are running as a tenant on a Flux cluster you can’t create clusterroles from there”

But I need to install JupyterHub as a tenant.
Is there any way to adjust JupyterHub via Helm chart values so that it does not try to create any ClusterRoles?


ClusterRoles are used by the user scheduler, which only really makes sense on a dedicated cluster. You can turn it off by setting scheduling.userScheduler.enabled to false in your Helm values.
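For reference, a minimal values sketch, assuming the Z2JH chart's scheduling.userScheduler.enabled flag (the option that controls whether the user scheduler and its ClusterRole/ClusterRoleBinding are rendered):

```yaml
# values.yaml for the jupyterhub Helm chart:
# disable the user scheduler so no cluster-scoped RBAC resources are rendered
scheduling:
  userScheduler:
    enabled: false
```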

Thanks! @manics
I turned off the user scheduler, passed the flux2-multi-tenancy HelmRelease check, and installed it as a tenant.
But I ran into some other issues, shown below.
Do you have any tips or docs on running JupyterHub on a K3S cluster?

ubuntu@k3s:~$ kubectl get pods -n ns-ml
NAME                                         READY   STATUS             RESTARTS   AGE
svclb-proxy-public-54gs4                     0/1     Pending            0          27m
continuous-image-puller-ndssw                1/1     Running            0          27m
proxy-b85795fd6-h9hs4                        1/1     Running            0          27m
hub-7f58567c57-kbjm7                         0/1     CrashLoopBackOff   9          27m
ubuntu@k3s:~$ kubectl get svc -n ns-ml
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
proxy-api                   ClusterIP   <none>        8001/TCP            27m
proxy-public                LoadBalancer   <pending>     80:30321/TCP        27m
hub                         ClusterIP    <none>        8081/TCP            27m

ubuntu@k3s:~$ k logs -n ns-ml hub-7f58567c57-kbjm7
Loading /usr/local/etc/jupyterhub/secret/values.yaml
No config at /usr/local/etc/jupyterhub/existing-secret/values.yaml
[I 2021-10-06 11:08:50.096 JupyterHub app:2459] Running JupyterHub version 1.4.2
[I 2021-10-06 11:08:50.096 JupyterHub app:2489] Using Authenticator: jupyterhub.auth.DummyAuthenticator-1.4.2
[I 2021-10-06 11:08:50.097 JupyterHub app:2489] Using Spawner: kubespawner.spawner.KubeSpawner-1.1.0
[I 2021-10-06 11:08:50.097 JupyterHub app:2489] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-1.4.2
[E 2021-10-06 11:08:50.109 JupyterHub app:2969]
    Traceback (most recent call last):
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/", line 2966, in launch_instance_async
        await self.initialize(argv)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/", line 2501, in initialize
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/", line 1703, in init_db
        dbutil.upgrade_if_needed(self.db_url, log=self.log)
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/", line 112, in upgrade_if_needed
      File "/usr/local/lib/python3.8/dist-packages/jupyterhub/", line 771, in check_db_revision
        current_table_names = set(inspect(engine).get_table_names())
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/", line 64, in inspect
        ret = reg(subject)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 182, in _engine_insp
        return Inspector._construct(Inspector._init_engine, bind)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 117, in _construct
        init(self, bind)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 128, in _init_engine
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 3165, in connect
        return self._connection_cls(self, close_with_result=close_with_result)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 96, in __init__
        else engine.raw_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 3244, in raw_connection
        return self._wrap_pool_connect(self.pool.connect, _connection)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 3214, in _wrap_pool_connect
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 2068, in _handle_dbapi_exception_noconnection
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/", line 207, in raise_
        raise exception
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 3211, in _wrap_pool_connect
        return fn()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 307, in connect
        return _ConnectionFairy._checkout(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 767, in _checkout
        fairy = _ConnectionRecord.checkout(pool)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 425, in checkout
        rec = pool._do_get()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 256, in _do_get
        return self._create_connection()
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 253, in _create_connection
        return _ConnectionRecord(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 368, in __init__
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 611, in __connect
        pool.logger.debug("Error on connect(): %s", e)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/", line 70, in __exit__
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/util/", line 207, in raise_
        raise exception
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/pool/", line 605, in __connect
        connection = pool._invoke_creator(self)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 578, in connect
        return dialect.connect(*cargs, **cparams)
      File "/usr/local/lib/python3.8/dist-packages/sqlalchemy/engine/", line 584, in connect
        return self.dbapi.connect(*cargs, **cparams)
    sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file
    (Background on this error at:

ubuntu@k3s:~$ k describe pod -n ns-ml svclb-proxy-public-54gs4
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16m   default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.
  Warning  FailedScheduling  16m   default-scheduler  0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports.

K3S works fine with Z2JH, it’s what we use for CI testing :smiley:

It looks like you’ve got problems with your storage; most likely the permissions on the volume are incorrect, so the hub can’t write to it.

A recent version of K3S had a bug (Cannot write data to local PVC · Issue #3704 · k3s-io/k3s · GitHub); maybe you’re hitting that?

Thanks a lot, @manics! I just updated K3S from v1.21.3+k3s1 to v1.21.4+k3s1, as recommended by dereknola in that issue, and the hub pod now runs well.

Glad to know that K3S is used for CI testing. Do you have a guide on running JupyterHub on a K3S cluster? For example, for the svclb port conflict, the proxy-public LoadBalancer stuck in pending, and Traefik config.

There’s no specific guide as it’s just another Kubernetes cluster.

The easiest way to get it running is to use ClusterIP services and an ingress instead of a load balancer. K3s ships with an ingress controller (Traefik) by default.
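As a sketch, the chart values for that setup might look like the following; proxy.service.type and ingress.enabled/ingress.hosts are standard Z2JH chart options, and the hostname is a placeholder:

```yaml
# Use a ClusterIP service instead of a LoadBalancer, and expose the
# proxy through the cluster's ingress controller (Traefik on K3s).
proxy:
  service:
    type: ClusterIP
ingress:
  enabled: true
  hosts:
    - jupyterhub.example.com  # placeholder hostname
```

With this, the proxy-public service no longer needs an external IP, which sidesteps both the svclb port conflict and the LoadBalancer pending state.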