Storage capacity is not respected by hub

Hi, I’ve deployed Zero to JupyterHub on a custom k8s cluster with a ceph-rbd storage class. I set

singleuser:
  storage:
    capacity: 2Gi

but the PVC is always 10Gi. I saw this, but in my case other apps on the same storage class can claim less or more, so the class itself doesn’t seem to enforce a fixed size. What should I do?
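For example, a standalone claim like this on the same storage class is provisioned at exactly the size requested (a minimal test manifest; the names are hypothetical):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-2gi   # hypothetical test claim, not part of the chart
spec:
  storageClassName: ceph-rbd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi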

Just checking: were these new users who hadn’t previously logged in, or existing users whose volumes were deleted? If it’s the latter, could it be that an old PersistentVolume was re-used for the new PersistentVolumeClaim instead of a new one being created?
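A PersistentVolume’s capacity is fixed when it is provisioned, and a claim can bind to any Available volume at least as large as its request, so a re-used volume would report the old size no matter what the chart asks for. A minimal sketch of what such a leftover volume looks like (hypothetical names and values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-leftover-example            # hypothetical
spec:
  capacity:
    storage: 10Gi                       # set at provisioning time, kept after the claim is deleted
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain # volume survives deletion of its claim
  storageClassName: ceph-rbd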

Can you show us the output of kubectl describe pv,pvc?

It might also be helpful to see your full Z2JH config with secrets redacted.

Even if I delete the PV and PVC and delete the image from Ceph, the new claim is still for 10Gi.
My config:

prePuller:
  hook:
    enabled: false
proxy:
  secretToken: 
hub:
  extraEnv:
    OAUTH2_AUTHORIZE_URL:
    OAUTH2_TOKEN_URL: 
    OAUTH_CALLBACK_URL: 
    JUPYTERHUB_CRYPT_KEY: 
  extraConfig:
    authHook: |
      c.Authenticator.auto_login = True
      c.Authenticator.enable_auth_state = True
      def userdata_hook(spawner, auth_state):
        token = auth_state["access_token"]
        import requests
        # inside cluster domain
        s3_api = 'http://s3-api-svc.s3-api-flask.svc.cluster.local/bucket'
        headers = {
            "authorization": f"Bearer {token}",
            "content-type": "application/json"
        }
        res = requests.get(s3_api, headers=headers)
        spawner.environment['TOKEN'] = auth_state["access_token"]
        spawner.environment['S3DIR'] = res.json()['bucket_name']
      c.Spawner.auth_state_hook = userdata_hook
    fuseConfig: |
      from kubernetes import client
      def modify_pod_hook(spawner, pod):
        pod.spec.containers[0].security_context = client.V1SecurityContext(
          privileged=True,
          capabilities=client.V1Capabilities(
              add=['SYS_ADMIN']
          )
        )
        return pod
      c.KubeSpawner.modify_pod_hook = modify_pod_hook
auth:
  type: custom
  custom:
    className: oauthenticator.generic.GenericOAuthenticator
    config:
      login_service: "keycloak"
      client_id: 
      client_secret: 
      refresh_pre_spawn: True
      logout_redirect_url: 
      token_url: 
      userdata_url: 
      userdata_method: GET
      userdata_params: {'state': 'state'}
      username_key: preferred_username

singleuser:
  defaultUrl: "/lab"
  cpu:
    limit: 4
    guarantee: 2
  memory:
    limit: 5G
    guarantee: 1G
  storage:
    capacity: 2Gi
    extraVolumes:
      - name: fuse
        hostPath:
          path: /dev/fuse
    extraVolumeMounts:
      - name: fuse
        mountPath: /dev/fuse
  uid: 0
  gid: 0
  lifecycleHooks:
    postStart:
      exec:
        command: ["init_s3.sh"]
    preStop:
      exec:
        command: ["umount", "/home/jovyan/s3"]
  extraEnv:
    CHOWN_HOME: 'yes'
storage:
    homeMountPath: /home/jovyan
  image:
    name: jupyter/minimal-notebook
    tag: 2343e33dec46
  profileList:
    - display_name: "s3 test environment"
      default: True
      description: "Testing connection with s3 and buckets"
      kubespawner_override:
        image: umberto10/jhub-notebook-s3

pvc:

Name:          claim-jupyter1
Namespace:     jhub
StorageClass:  ceph-rbd
Status:        Bound
Volume:        pvc-c78c0ccd-aa73-4f90-a210-1ea497b5b456
Labels:        app=jupyterhub
               chart=jupyterhub-0.9.1
               component=singleuser-storage
               heritage=jupyterhub
               release=jhub
Annotations:   hub.jupyter.org/username: jupyter1
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>


Name:          claim-test2
Namespace:     jhub
StorageClass:  ceph-rbd
Status:        Bound
Volume:        pvc-c066c35a-a397-408f-af85-855f51f0e3cd
Labels:        app=jupyterhub
               chart=jupyterhub-0.9.1
               component=singleuser-storage
               heritage=jupyterhub
               release=jhub
Annotations:   hub.jupyter.org/username: test2
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>


Name:          claim-umb
Namespace:     jhub
StorageClass:  ceph-rbd
Status:        Bound
Volume:        pvc-3b4c3484-ed7e-45d1-82ad-48db000311da
Labels:        app=jupyterhub
               chart=jupyterhub-0.9.1
               component=singleuser-storage
               heritage=jupyterhub
               release=jhub
Annotations:   hub.jupyter.org/username: umb
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>


Name:          claim-umberto
Namespace:     jhub
StorageClass:  ceph-rbd
Status:        Bound
Volume:        pvc-80340ffa-b8ee-4edc-8ecb-1e5480e58ae0
Labels:        app=jupyterhub
               chart=jupyterhub-0.9.1
               component=singleuser-storage
               heritage=jupyterhub
               release=jhub
Annotations:   hub.jupyter.org/username: umberto
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>


Name:          hub-db-dir
Namespace:     jhub
StorageClass:  ceph-rbd
Status:        Bound
Volume:        pvc-c80ce524-05cf-4ad0-9782-745d1a6e34d7
Labels:        app=jupyterhub
               app.kubernetes.io/managed-by=Helm
               chart=jupyterhub-0.9.1
               component=hub
               heritage=Helm
               release=jhub
Annotations:   meta.helm.sh/release-name: jhub
               meta.helm.sh/release-namespace: jhub
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       hub-6f7b8886b9-r42kx
Events:        <none>

pv:

Name:            pvc-3b4c3484-ed7e-45d1-82ad-48db000311da
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Bound
Claim:           jhub/claim-umb
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-c71b9785-69ed-11eb-bc63-021fce7b403c
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-576f7ef6-67be-4b83-998e-2c34ed4fdf9e
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Released
Claim:           k8s-kvdi/kvdi-jupyter1-userdata
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        500Mi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-d287f0cd-6adb-11eb-bc63-021fce7b403c
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-60239164-3f6d-4df4-b5e7-00d8f2e5e8ec
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Bound
Claim:           binder/hub-db-dir
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-801c6c3d-74f4-11eb-88e1-36f5b14170ba
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-80340ffa-b8ee-4edc-8ecb-1e5480e58ae0
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Bound
Claim:           jhub/claim-umberto
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-559a8d9d-75bd-11eb-88e1-36f5b14170ba
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-916504c7-daa8-44ca-8c7a-367d608e4fde
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        500Mi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-120127e9-6b81-11eb-bc63-021fce7b403c
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-9409103b-a529-4b5e-996e-c1f677124d08
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Released
Claim:           k8s-kvdi/kvdi-umb-userdata
Reclaim Policy:  Retain
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        500Mi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-9393f10f-6adb-11eb-bc63-021fce7b403c
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-c066c35a-a397-408f-af85-855f51f0e3cd
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Bound
Claim:           jhub/claim-test2
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-97ae7289-7164-11eb-b830-6674848c635f
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-c78c0ccd-aa73-4f90-a210-1ea497b5b456
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Bound
Claim:           jhub/claim-jupyter1
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-135461ad-6adc-11eb-bc63-021fce7b403c
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>


Name:            pvc-c80ce524-05cf-4ad0-9782-745d1a6e34d7
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: ceph.com/rbd
                 rbdProvisionerIdentity: ceph.com/rbd
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    ceph-rbd
Status:          Bound
Claim:           jhub/hub-db-dir
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:
Source:
    Type:          RBD (a Rados Block Device mount on the host that shares a pod's lifetime)
    CephMonitors:  [10.26.41.11:6789]
    RBDImage:      kubernetes-dynamic-pvc-128e7dfc-72bf-11eb-8240-36f5b14170ba
    FSType:
    RBDPool:       cloudstack_primary
    RadosUser:     k8s
    Keyring:       /etc/ceph/keyring
    SecretRef:     &SecretReference{Name:ceph-admin-secret,Namespace:kube-system,}
    ReadOnly:      false
Events:            <none>

Can you double-check your configuration? I can’t tell if there’s a copy-and-paste error, since the indentation looks wrong, but you might have two storage properties under singleuser. Since it’s a dictionary, only one of them will take effect.
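If the duplicate without capacity is the one that wins when the file is parsed, the chart falls back to its 10Gi default. Merging the two sections into one should give you all the settings at once (a sketch based on the values above):

singleuser:
  storage:
    capacity: 2Gi
    homeMountPath: /home/jovyan
    extraVolumes:
      - name: fuse
        hostPath:
          path: /dev/fuse
    extraVolumeMounts:
      - name: fuse
        mountPath: /dev/fuse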

Thanks! That was it: a duplicated storage section. Thanks for your help!
