JupyterHub persistent volume resets on every pod restart

We have hosted JupyterHub on our AKS cluster using the Helm chart from the repo: jupyterhub/zero-to-jupyterhub-k8s: Helm Chart & Documentation for deploying JupyterHub on Kubernetes (github.com).

The storage for the singleuser pod resets every time the pod restarts. The storage configuration for the single-user pod is as below:

storage:
  type: dynamic
  extraVolumes:
    - name: shared-vol
      persistentVolumeClaim:
        claimName: pvc-blob
  extraVolumeMounts:
    - name: shared-vol
      mountPath: /home/shared
  static:
    pvcName:
    subPath: "{username}"
  capacity: 10Gi
  homeMountPath: /home/{username}
  dynamic:
    storageClass:
    pvcNameTemplate: claim-{username}{servername}
    volumeNameTemplate: volume-{username}{servername}
    storageAccessModes: [ReadWriteOnce]

We are using the default StorageClass, as shown below. The PV and PVCs are retained, but the data is reset on every pod restart.

Name:                  default
IsDefaultClass:        Yes
Annotations:           storageclass.kubernetes.io/is-default-class=true
Provisioner:           disk.csi.azure.com
Parameters:            skuname=StandardSSD_LRS
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

We also tried custom storage classes, both with Azure Files and with the reclaim policy set to Retain, but in vain.
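
For reference, a minimal sketch of the kind of custom StorageClass we tried (Azure Files CSI with the reclaim policy set to Retain; the name and SKU here are illustrative):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jupyter-azurefile
provisioner: file.csi.azure.com
parameters:
  skuName: Standard_LRS
reclaimPolicy: Retain       # keep the underlying volume when the PVC is deleted
allowVolumeExpansion: true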

Hi! What do you mean by “the data is reset on every pod restart”? Can you be more specific?

When we download git repos or any .ipynb files into the default /work directory (shown in the left-side bar), they are no longer present once the pod is restarted.

I’m afraid it’s still not clear to me what you mean. Are you able to share your full configuration and Dockerfile?

Hi @manics -

Pardon me for that.

Currently we are hosting JupyterHub on AKS using the Helm charts from the git repo here.

To re-explain the issue -
As part of the Helm chart’s ‘singleuser’ configuration, which KubeSpawner uses when spawning user pods, we need a persistent volume so that users can store their data while working in JupyterHub notebooks. This is configured as below:

singleuser:
  networkTools:
    image:
      name: jupyterhub/k8s-network-tools
      tag: "2.0.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  extraEnv:
    CHOWN_HOME: 'yes'
  uid: 0 # enabled for testing blobfuse
  fsGid: 100
  storage:
    type: dynamic
    extraVolumes:
      - name: shared-vol
        persistentVolumeClaim:
          claimName: pvc-blob
    extraVolumeMounts:
      - name: shared-vol
        mountPath: /home/shared
    capacity: 10Gi
    homeMountPath: /home/{username} # test this
    dynamic:
      storageClass: jupyter-azurefile
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
      storageAccessModes: [ReadWriteOnce]
  image:
    name: <imagename>.azurecr.io/jupyter/minimal-notebook
    tag: "v2"
    pullPolicy:
    pullSecrets:
      - <secret-name>

As you can see in the above configuration, we have set singleuser.storage.type to dynamic, so it should create a persistent volume and a claim using the custom StorageClass mentioned.
As per our understanding, this PV should allow JupyterHub users to store their data at the mount path mentioned (/home/{username}).
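
For concreteness, the PVC that KubeSpawner should create from this configuration would look roughly like the sketch below (the username myuser and the exact metadata are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-myuser        # from pvcNameTemplate; "myuser" is illustrative
spec:
  accessModes:
    - ReadWriteOnce         # from storageAccessModes
  storageClassName: jupyter-azurefile
  resources:
    requests:
      storage: 10Gi         # from singleuser.storage.capacity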

However, when the user accesses the JupyterHub UI and stores data under /home/{username}, it persists only while the user pod is alive. When the user pod is timed out or deleted and a new pod is created on the next login, the user cannot find the data stored in that path. Please refer to the screengrab.
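
The behaviour is easy to reproduce (illustrative commands, run from the user pod’s terminal):

# create a marker file in the supposedly persisted home path
echo test > /home/<username>/marker.txt
# stop the server (or let the idle culler remove the pod), log in again, then:
ls /home/<username>/marker.txt    # fails with "No such file or directory" after the restart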

Hi @consideRatio - could you please help with the above case?

There are different parts to diagnose, worth doing one at a time.

  1. Does the Helm chart configuration, which in turn configures KubeSpawner, lead to the creation of a PVC as part of starting the user server?

    It’s important to note that if a PVC already exists, KubeSpawner will not update it. If you want the PVC updated, do it manually or delete it so KubeSpawner can create it from scratch.

    To test this, use kubectl get pvc to list PVC resources (see the sketch after this list). A PVC may also lead to the creation of a PV if it references a k8s StorageClass, but that depends on the StorageClass.

  2. Does the k8s Pod started for the user correctly mount the created PVC?

    Use kubectl get pod jupyter-myuser -o yaml and inspect the pod’s volumes and the main container’s volumeMounts (see the sketch after this list).

  3. If all of the above checks out, you should have a user pod with a volume mounted somewhere in its filesystem, and changes made there should be changes to the mounted storage.

    If something is still wrong, my main suspicions are:

    1. singleuser.storage.homeMountPath determines where the user storage is mounted, but is that actually the home folder of the user that starts up? Maybe not.
    2. You are using an old PVC, created earlier with some other configuration, and now expecting it to reflect the new configuration.
    3. The image used has a startup script that changes the user, relocates the home folder, or does something else unexpected on startup. For example singleuser.extraEnv CHOWN_HOME - I think that option is fine, but maybe some configuration that switches to another user is causing issues.

Anyhow, if you can confirm the first two main points above, then I think you can rule out that it’s an issue with the Helm chart or KubeSpawner, and focus on general issues you could reproduce with self-created pods running, for example, the busybox image.
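
A minimal sketch of those checks (the namespace, username, and claim names are illustrative; adjust them to your deployment):

# 1. Was a PVC created when the user server started?
kubectl get pvc -n <namespace>    # expect e.g. claim-myuser with STATUS Bound

# 2. Does the user pod mount it?
kubectl get pod jupyter-myuser -n <namespace> -o yaml \
  | grep -B2 -A4 -E 'volumeMounts:|persistentVolumeClaim'

# 3. Reproduce outside JupyterHub: a throwaway busybox pod mounting the same PVC
kubectl apply -n <namespace> -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "date >> /mnt/test.log && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: claim-myuser   # use the PVC name from step 1
EOF
# delete and recreate pvc-test; /mnt/test.log should keep its earlier entries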

Hi @consideRatio

Thanks for the inputs and sorry for the delayed response.

Based on your suggestion, we found that the volume is mounted at /home/jovyan and that data there persists as it should. So no issues with that.

However, the default working directory users get when a terminal is opened is /home/{username}, which does not persist any data, even when singleuser.storage.homeMountPath is set to /home/{username}.

So we would need to make the default working directory (the one users see on opening a terminal) /home/jovyan, as that is what currently persists the data. Any idea where we can set this? We already tried setting singleuser.storage.homeMountPath to /home/jovyan, but it doesn’t work.

So we would need to make the default working directory (the one users see on opening a terminal) /home/jovyan, as that is what currently persists the data. Any idea where we can set this?

You’ll have to configure that inside your Docker image, e.g. by passing the username as an environment variable and running a script inside your image. For example, jupyter/docker-stacks has such a script built in:
https://jupyter-docker-stacks.readthedocs.io/en/latest/using/common.html
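
With a jupyter/docker-stacks based image, the pieces could fit together roughly as in the sketch below; this mirrors the hub.extraConfig approach in the full configuration that follows, and is untested:

# a sketch, assuming a jupyter/docker-stacks image whose start.sh entrypoint
# honours the NB_USER/NB_UID/CHOWN_HOME environment variables; untested
singleuser:
  uid: 0    # start.sh needs root to rename the default jovyan user
  cmd: ['start.sh', 'jupyterhub-singleuser', '--allow-root']
  extraEnv:
    CHOWN_HOME: 'yes'
  storage:
    homeMountPath: /home/{username}   # where the renamed home should live
hub:
  extraConfig:
    setNbUser: |
      # pass the JupyterHub username into the pod so that start.sh renames
      # jovyan and relocates the home folder accordingly
      def set_nb_user(spawner):
          spawner.environment.update({'NB_USER': spawner.user.name, 'NB_UID': '1000'})
      c.KubeSpawner.pre_spawn_hook = set_nb_user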

If you share your full configuration it’ll be easier for us to help you.

Hi @manics
Here is the complete Helm chart configuration for your reference:

# imagePullSecrets references the k8s Secret resources from which the Helm
# chart's pods can get credentials to pull their images.
imagePullSecrets:
  - jhub-secret-acr-sp

# hub relates to the hub pod, responsible for running JupyterHub, its
# configured Authenticator class, KubeSpawner, and its configured Proxy class
# ConfigurableHTTPProxy. KubeSpawner creates the user pods, and
# ConfigurableHTTPProxy speaks with the actual ConfigurableHTTPProxy server in
# the proxy pod.
hub:
  revisionHistoryLimit:
  config:
    AzureAdOAuthenticator:
      username_claim: unique_name
      enable_auth_state: true
      client_id: <REDACTED>
      client_secret: <REDACTED>
      oauth_callback_url: <REDACTED>
      tenant_id: <REDACTED>
      scope:
        - openid
        - profile
        - email
    JupyterHub:
      admin_access: true
      authenticator_class: azuread  
  service:
    type: ClusterIP
    annotations: {}
    ports:
      nodePort:
    extraPorts: []
    loadBalancerIP:
  baseUrl: /
  cookieSecret:
  initContainers: []
  nodeSelector: {}
  #  kubernetes.azure.com/agentpool: defaultpool
  tolerations: []
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  activeServerLimit:
  deploymentStrategy:
    ## type: Recreate
    ## - sqlite-pvc backed hubs require the Recreate deployment strategy, as
    ##   typical PVC storage can only be bound to one pod at a time.
    ## - JupyterHub isn't designed to support being run in parallel. More work
    ##   needs to be done in JupyterHub itself before a fully highly available
    ##   (HA) deployment of JupyterHub on k8s is possible.
    type: Recreate
  db:
    type: sqlite-pvc
    upgrade:
    pvc:
      annotations: {}
      selector: {}
      accessModes:
        - ReadWriteOnce
      storage: 1Gi
      subPath:
      storageClassName:
    url:
    password:
  labels: {}
  annotations: {}
  command: []
  args: []
  #extraConfig: {}
  extraConfig:
    CustomSpawner: |
      # Run the docker-stacks start.sh entrypoint as root so it can rename the
      # default jovyan user based on the NB_USER/NB_UID environment variables.
      c.Spawner.cmd = ['start.sh', 'jupyterhub-singleuser', '--allow-root']
      c.KubeSpawner.args = ['--allow-root']

      # Pass the JupyterHub username into the user pod before it spawns.
      def notebook_dir_hook(spawner):
          spawner.environment = {'NB_USER': spawner.user.name, 'NB_UID': '1000'}
      c.Spawner.pre_spawn_hook = notebook_dir_hook
  extraFiles: {}
  extraEnv: {}
  extraContainers: []
  extraVolumes: []
  extraVolumeMounts: []
  image:
    name: jupyterhub/k8s-hub
    tag: "2.0.0"
    pullPolicy:
    pullSecrets: []
  resources: {}
  podSecurityContext:
    fsGroup: 1000
  containerSecurityContext:
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
  lifecycle: {}
  loadRoles: {}
  services: {}
  pdb:
    enabled: false
    maxUnavailable:
    minAvailable: 1
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: true
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: true
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  allowNamedServers: false
  namedServerLimitPerUser:
  authenticatePrometheus:
  redirectToServer:
  shutdownOnLogout:
  templatePaths: []
  templateVars: {}
  livenessProbe:
    # The livenessProbe's aim is to give JupyterHub sufficient time to start up,
    # but to restart it if it becomes unresponsive for ~5 min.
    enabled: true
    initialDelaySeconds: 300
    periodSeconds: 10
    failureThreshold: 30
    timeoutSeconds: 3
  readinessProbe:
    # The readinessProbe's aim is to provide a successful startup indication,
    # but following that never to become unready before the livenessProbe fails
    # and restarts the pod if needed. Becoming unready after startup serves no
    # purpose, as there is no other pod to fall back to in our non-HA deployment.
    enabled: true
    initialDelaySeconds: 0
    periodSeconds: 2
    failureThreshold: 1000
    timeoutSeconds: 1
  existingSecret:
  serviceAccount:
    create: true
    name:
    annotations: {}
  extraPodSpec: {}

rbac:
  create: true

# proxy relates to the proxy pod, the proxy-public service, and the autohttps
# pod and proxy-http service.
proxy:
  secretToken:
  annotations: {}
  deploymentStrategy:
    ## type: Recreate
    ## - JupyterHub's interaction with the CHP proxy becomes a lot more robust
    ##   with this configuration. To understand this, consider that JupyterHub
    ##   during startup will interact a lot with the k8s service to reach a
    ##   ready proxy pod. If the hub pod restarts during a helm upgrade while
    ##   the proxy pod is doing a rolling upgrade, the hub pod could start a
    ##   sequence of interactions against the old proxy pod and finish it
    ##   against the new proxy pod. As CHP proxy pods carry individual state,
    ##   this is very error prone. One outcome when not using the Recreate
    ##   strategy has been that user pods were deleted by the hub pod because
    ##   it considered them unreachable, as it had only configured the old
    ##   proxy pod, not the new one, before trying to reach them.
    type: Recreate
    ## rollingUpdate:
    ## - WARNING:
    ##   This is required to be set explicitly blank! Without it being
    ##   explicitly blank, k8s will let eventual old values under rollingUpdate
    ##   remain and then the Deployment becomes invalid and a helm upgrade would
    ##   fail with an error like this:
    ##
    ##     UPGRADE FAILED
    ##     Error: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    ##     Error: UPGRADE FAILED: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    rollingUpdate:
  # service relates to the proxy-public service
  service:
    type: ClusterIP
    labels: {}
    annotations: {}
    nodePorts:
      http:
      https:
    disableHttpPort: false
    extraPorts: []
    loadBalancerIP:
    loadBalancerSourceRanges: []
  # chp relates to the proxy pod, which is responsible for routing traffic based
  # on dynamic configuration sent from JupyterHub to CHP's REST API.
  chp:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: jupyterhub/configurable-http-proxy
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      tag: "4.5.3" # https://github.com/jupyterhub/configurable-http-proxy/releases
      pullPolicy:
      pullSecrets: []
    extraCommandLineFlags: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 0
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    resources: {}
    defaultTarget:
    errorTarget:
    extraEnv: {}
    nodeSelector: {}
    #  kubernetes.azure.com/agentpool: defaultpool
    tolerations: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: true
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    extraPodSpec: {}
  # traefik relates to the autohttps pod, which is responsible for TLS
  # termination when proxy.https.type=letsencrypt.
  traefik:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: traefik
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      tag: "v2.8.4" # ref: https://hub.docker.com/_/traefik?tab=tags
      pullPolicy:
      pullSecrets: []
    hsts:
      includeSubdomains: false
      preload: false
      maxAge: 15724800 # About 6 months
    resources: {}
    labels: {}
    extraInitContainers: []
    extraEnv: {}
    extraVolumes: []
    extraVolumeMounts: []
    extraStaticConfig: {}
    extraDynamicConfig: {}
    nodeSelector: {}
    #  kubernetes.azure.com/agentpool: defaultpool
    tolerations: []
    extraPorts: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: true
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  secretSync:
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: jupyterhub/k8s-secret-sync
      tag: "2.0.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  labels: {}
  https:
    enabled: false
    type: letsencrypt
    #type: letsencrypt, manual, offload, secret
    letsencrypt:
      contactEmail:
      # Specify custom server here (https://acme-staging-v02.api.letsencrypt.org/directory) to hit staging LE
      acmeServer: https://acme-v02.api.letsencrypt.org/directory
    manual:
      key:
      cert:
    secret:
      name:
      key: tls.key
      crt: tls.crt
    hosts: []

# singleuser relates to the configuration of KubeSpawner which runs in the hub
# pod, and its spawning of user pods such as jupyter-myusername.
singleuser:
  podNameTemplate:
  extraTolerations: []
  nodeSelector: {}
  #  kubernetes.azure.com/agentpool: defaultpool
  extraNodeAffinity:
    required: []
    preferred: []
  extraPodAffinity:
    required: []
    preferred: []
  extraPodAntiAffinity:
    required: []
    preferred: []
  networkTools:
    image:
      name: jupyterhub/k8s-network-tools
      tag: "2.0.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  cloudMetadata:
    # blockWithIptables set to true will append a privileged initContainer that
    # uses iptables to block the sensitive metadata server at the provided ip.
    blockWithIptables: true
    ip: 169.254.169.254
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: false
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: true # false - to test connectivity to Azure Batch / Key Vault
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  events: true
  extraAnnotations: {}
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
  extraFiles: {}
  #extraEnv: {}
  extraEnv:
    CHOWN_HOME: 'yes'
  lifecycleHooks: {}
  initContainers: []
  extraContainers: []
  allowPrivilegeEscalation: false
  #uid: 1000
  uid: 0 #enabled for testing blobfuse
  fsGid: 100
  serviceAccountName:
  storage:
    type: dynamic 
    extraLabels: {}
    # extraVolumes: []
    extraVolumes:
      - name: shared-vol
        persistentVolumeClaim:
          claimName: pvc-blob
    # extraVolumeMounts: [] #enabled for testing blobfuse
    extraVolumeMounts:
      - name: shared-vol
        mountPath: /home/shared
    static:
      pvcName: pvc-jhub-static-azuredisk
      subPath: "{username}"
    capacity: 10Gi
    homeMountPath: /home/jovyan #test this
    dynamic:
      storageClass: #jupyter-azurefile
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
      storageAccessModes: [ReadWriteOnce]
  image:
    name: <REDACTED>.azurecr.io/jupyter/minimal-notebook
    tag: "vUbuntu2004"
    pullPolicy:
    pullSecrets:
      - jhub-secret-acr-sp
  startTimeout: 300
  cpu:
    limit:
    guarantee:
  memory:
    limit:
    guarantee: 1G
  extraResource:
    limits: {}
    guarantees: {}
  cmd: jupyterhub-singleuser
  defaultUrl:
  extraPodConfig: {}
  #profileList: []
  profileList:
    - display_name: "DE Environment(default)"
      description: "Tools Stack: Python, Azure CLI, SQLAlchemy(Dremio), pandas, numpy, plotly, adlfs "
      default: true
    - display_name: "DE ML Pipeline environment"
      description: "Environment to run the Studio ML Pipeline"
      kubespawner_override:
        image: <REDACTED>.azurecr.io/jupyter/minimal-notebook:vML1
        image_pull_secrets: "jhub-secret-acr-sp"
    - display_name: "Spark environment"
      description: "The Jupyter Stacks spark image!"
      kubespawner_override:
        image: jupyter/all-spark-notebook:2343e33dec46
    - display_name: "Learning Data Science"
      description: "Datascience Environment with Sample Notebooks"
      kubespawner_override:
        image: jupyter/datascience-notebook:2343e33dec46
        lifecycle_hooks:
          postStart:
            exec:
              command:
                - "sh"
                - "-c"
                - >
                  gitpuller https://github.com/data-8/materials-fa17 master materials-fa;

# scheduling relates to the user-scheduler pods and user-placeholder pods.
scheduling:
  userScheduler:
    enabled: true
    revisionHistoryLimit:
    replicas: 2
    logLevel: 4
    # plugins are configured on the user-scheduler to score how user pods are
    # scheduled, in a way that helps us schedule them onto the most busy node.
    # By doing this, we help scale down more effectively. It isn't obvious how
    # to enable/disable scoring plugins, and configure them, to accomplish this.
    #
    # plugins ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins-1
    # migration ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduler-configuration-migrations
    #
    plugins:
      score:
        # These scoring plugins are enabled by default according to
        # https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins
        # 2022-02-22.
        #
        # Enabled with high priority:
        # - NodeAffinity
        # - InterPodAffinity
        # - NodeResourcesFit
        # - ImageLocality
        # Remains enabled with low default priority:
        # - TaintToleration
        # - PodTopologySpread
        # - VolumeBinding
        # Disabled for scoring:
        # - NodeResourcesBalancedAllocation
        #
        disabled:
          # We disable these plugins (with regards to scoring) to not interfere
          # or complicate our use of NodeResourcesFit.
          - name: NodeResourcesBalancedAllocation
          # Disable plugins to be allowed to enable them again with a different
          # weight and avoid an error.
          - name: NodeAffinity
          - name: InterPodAffinity
          - name: NodeResourcesFit
          - name: ImageLocality
        enabled:
          - name: NodeAffinity
            weight: 14631
          - name: InterPodAffinity
            weight: 1331
          - name: NodeResourcesFit
            weight: 121
          - name: ImageLocality
            weight: 11
    pluginConfig:
      # Here we declare that we should optimize pods to fit based on a
      # MostAllocated strategy instead of the default LeastAllocated.
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
            type: MostAllocated
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      # IMPORTANT: Bumping the minor version of this binary should go hand in
      #            hand with an inspection of the user-scheduler's RBAC resources
      #            that we have forked in
      #            templates/scheduling/user-scheduler/rbac.yaml.
      #
      #            Debugging advice:
      #
      #            - Is configuration of kube-scheduler broken in
      #              templates/scheduling/user-scheduler/configmap.yaml?
      #
      #            - Is the kube-scheduler binary incompatible with the k8s
      #              api-server, for example because it is too new or too old?
      #
      #            - You can update the GitHub workflow that runs tests to
      #              include "deploy/user-scheduler" in the k8s namespace report
      #              and reduce the user-scheduler deployments replicas to 1 in
      #              dev-config.yaml to get relevant logs from the user-scheduler
      #              pods. Inspect the "Kubernetes namespace report" action!
      #
      #            - Typical failures are that kube-scheduler fails to look up
      #              resources via its "informers" and won't start trying to
      #              schedule pods before those lookups succeed; this may
      #              require additional RBAC permissions, or require that the
      #              k8s api-server is aware of the resources.
      #
      #            - If "successfully acquired lease" can be seen in the logs, it
      #              is a good sign kube-scheduler is ready to schedule pods.
      #
      name: k8s.gcr.io/kube-scheduler
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow. The minor version is pinned in the
      # workflow, and should be updated there if a minor version bump is done
      # here.
      #
      tag: "v1.23.10" # ref: https://github.com/kubernetes/website/blob/4e6eceff71da0a1afd24fd9ce78b8e278b22f1bf/content/en/releases/patch-releases.md
      pullPolicy:
      pullSecrets: []
    nodeSelector: {}
    #  kubernetes.azure.com/agentpool: defaultpool
    tolerations: []
    labels: {}
    annotations: {}
    pdb:
      enabled: true
      maxUnavailable: 1
      minAvailable:
    resources: {}
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  podPriority:
    enabled: false
    globalDefault: false
    defaultPriority: 0
    imagePullerPriority: -5
    userPlaceholderPriority: -10
  userPlaceholder:
    enabled: true
    image:
      name: k8s.gcr.io/pause
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      # If you update this, also update prePuller.pause.image.tag
      #
      tag: "3.8"
      pullPolicy:
      pullSecrets: []
    revisionHistoryLimit:
    replicas: 0
    labels: {}
    annotations: {}
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    resources: {}
  corePods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: core
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: core
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer
  userPods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: user
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: user
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer

# prePuller relates to the hook|continuous-image-puller DaemonSets
prePuller:
  revisionHistoryLimit:
  labels: {}
  annotations: {}
  resources: {}
  containerSecurityContext:
    runAsUser: 65534 # nobody user
    runAsGroup: 65534 # nobody group
    allowPrivilegeEscalation: false
  extraTolerations: []
  # hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
  hook:
    enabled: true
    pullOnlyOnChanges: true
    # image and the configuration below relates to the hook-image-awaiter Job
    image:
      name: jupyterhub/k8s-image-awaiter
      tag: "2.0.0"
      pullPolicy:
      pullSecrets: []
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    podSchedulingWaitDuration: 10
    nodeSelector: {}
    #  kubernetes.azure.com/agentpool: defaultpool
    tolerations: []
    resources: {}
    serviceAccount:
      create: true
      name:
      annotations: {}
  continuous:
    enabled: true
  pullProfileListImages: true
  extraImages: {}
  pause:
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: k8s.gcr.io/pause
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      # If you update this, also update scheduling.userPlaceholder.image.tag
      #
      tag: "3.8"
      pullPolicy:
      pullSecrets: []

ingress:
  enabled: false
  annotations: {}
  ingressClassName:
  hosts: []
  pathSuffix:
  pathType: Prefix
  tls: []

# cull relates to the jupyterhub-idle-culler service, responsible for evicting
# inactive singleuser pods.
#
# The configuration below, except for enabled, corresponds to command-line flags
# for jupyterhub-idle-culler as documented here:
# https://github.com/jupyterhub/jupyterhub-idle-culler#as-a-standalone-script
#
cull:
  enabled: true
  users: false # --cull-users
  adminUsers: true # --cull-admin-users
  removeNamedServers: false # --remove-named-servers
  timeout: 3600 # --timeout
  every: 600 # --cull-every
  concurrency: 10 # --concurrency
  maxAge: 0 # --max-age

debug:
  enabled: true

global:
  safeToShowValues: false