Run the single-user pod with ads UID & GID

Hi,

How do I configure the JupyterHub single-user pod to run with the ads UID & GID?

Thanks

Does ads mean Active Directory? If so, have a look at the LDAP authenticator example.

That example is for LDAP, but you should be able to modify it to work with AD. In addition to the overridden authenticator, you need to use a jupyter/docker-stacks image which can switch UID and GID at startup.
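For example, the singleuser image selection in the Z2JH values could look like the sketch below (the image name and tag are illustrative; any recent docker-stacks image ships the start.sh logic that can switch to NB_USER/NB_UID/NB_GID when the container starts as root):

singleuser:
  image:
    # illustrative; any recent jupyter/docker-stacks image includes the
    # start.sh user-switching logic
    name: quay.io/jupyter/base-notebook
    tag: latest        # placeholder; pin a specific tag in practice
  uid: 0               # start as root so the UID/GID switch is permitted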

Thanks @manics
Yes, ads means Active Directory. Is there any pre-built image with the above configuration?
My environment is a little complicated; I can't build images directly.

The JupyterHub Helm chart contains all required dependencies, and everything else should be configurable using the configuration files, including extending the authenticator.

For other platforms you’ll need to manage the dependencies yourself.
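As a rough sketch of that pattern (the class name here is illustrative, not a tested configuration), the authenticator can be extended entirely from the Helm values:

hub:
  extraConfig:
    customAuthenticator: |
      from ldapauthenticator import LDAPAuthenticator

      class MyLDAPAuthenticator(LDAPAuthenticator):
          # override hooks such as pre_spawn_start here
          pass

      c.JupyterHub.authenticator_class = MyLDAPAuthenticator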

@manics I'm stuck at this stage when building a Docker image with the Dockerfile below.

Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working...

Those singleuser images are pre-built and available on quay.io:
Running a Container — Docker Stacks documentation

It’s the hub image that you’ll need to customise unless you’re using the Z2JH Helm chart.

Hi @manics

I'm using the default Z2JH Helm chart. I didn't make any changes to the script.
I made changes to the authenticator and I can see the UID & GID in the logs, but I'm not able to log in to JupyterHub.

[D 2025-06-18 16:13:29.532 JupyterHub ldapauthenticator:532] Attempting to bind user-account
[D 2025-06-18 16:13:30.238 JupyterHub ldapauthenticator:559] Successfully bound user-account
[D 2025-06-18 16:13:30.238 JupyterHub ldapauthenticator:446] Looking up user with:
        search_base = 'DC=ads,DC=abc,DC=com'
        search_filter = '(sAMAccountName=userid)'
        attributes = '[cn]'
[D 2025-06-18 16:13:30.278 JupyterHub ldapauthenticator:532] Attempting to bind CN=username,OU=abc,OU=People,DC=ads,DC=abc,DC=com
[D 2025-06-18 16:13:31.073 JupyterHub ldapauthenticator:559] Successfully bound CN=username,OU=abc,OU=People,DC=ads,DC=abc,DC=com
[D 2025-06-18 16:13:31.112 JupyterHub ldapauthenticator:701] username:username attributes:{'uidNumber': [00000], 'gidNumber': [00000], 'uid': ['userid']}
[W 2025-06-18 16:13:31.113 JupyterHub metrics:456] Event loop was unresponsive for at least 1.54s!
[W 2025-06-18 16:13:31.113 JupyterHub auth:739] User 'userid' not allowed.
[D 2025-06-18 16:13:31.113 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.50ms
[W 2025-06-18 16:13:31.113 JupyterHub base:979] Failed login for rgadh1
[W 2025-06-18 16:13:31.114 JupyterHub log:192] 403 POST /hub/login?next=%2Fhub%2F (@::ffff:) 1585.39ms
[D 2025-06-18 16:13:33.094 JupyterHub log:192] 200 GET /hub/health (@) 0.40ms
[D 2025-06-18 16:13:35.094 JupyterHub log:192] 200 GET /hub/health (@) 0.55ms
[D 2025-06-18 16:13:35.094 JupyterHub log:192] 200 GET /hub/health (@) 0.54ms

It sounds like you’re missing some configuration to allow the user.
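Since JupyterHub 5, no users are allowed by default, so you need either allow_all or an explicit allow list. A minimal sketch (the username is a placeholder):

hub:
  config:
    Authenticator:
      # allow anyone who authenticates successfully against AD...
      allow_all: true
      # ...or, instead, list the permitted accounts explicitly:
      # allowed_users:
      #   - someuser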

I'm using the config below.

hub:
  config:
    # JupyterHub:
    #   authenticator_class: ldapauthenticator.LDAPAuthenticator
    Authenticator:
      enable_auth_state: true
      allow_all: true
    LDAPAuthenticator:
      # See https://github.com/rroemhild/docker-test-openldap#ldap-structure
      # for users
      server_address: ldap-test-openldap
      lookup_dn: True
      bind_dn_template: "cn={username},ou=people,dc=planetexpress,dc=com"
      user_search_base: "ou=people,dc=planetexpress,dc=com"
      user_attribute: uid
      lookup_dn_user_dn_attribute: cn
      escape_userdn: True
      auth_state_attributes: ["uid", "cn", "mail", "ou"]
      use_lookup_dn_username: False
  extraConfig:
    configClass: |
      c.JupyterHub.authenticator_class = LDAPAuthenticatorExtend
    extendedLDAP: |
      from tornado import gen
      from ldapauthenticator import LDAPAuthenticator
      class LDAPAuthenticatorExtend(LDAPAuthenticator):
        @gen.coroutine
        def pre_spawn_start(self, user, spawner):
          self.log.debug('running preSpawn hook')
          auth_state = yield spawner.user.get_auth_state()
          self.log.debug('pre_spawn_start auth_state:%s' % auth_state)
          spawner.environment["NB_UID"] = str(auth_state["uidNumber"][0])
          spawner.environment["NB_GID"] = str(auth_state["gidNumber"][0])
          spawner.environment["NB_USER"] = str(auth_state["uid"][0])
          c.KubeSpawner.uid = str(auth_state["uidNumber"][0])

    logging: |
      c.JupyterHub.log_level = 'DEBUG'
      c.KubeSpawner.debug = True
      c.LocalProcessSpawner.debug = True
singleuser:
  uid: 0
  extraEnv:
    GRANT_SUDO: "yes"
    NOTEBOOK_ARGS: "--allow-root"

Is that your actual configuration, or an example? It doesn’t match your logs.

Assuming you’ve anonymised your config, I think you’ll need to talk to your Active Directory administrator to check the settings are correct.

Yes, it is the actual configuration; I have anonymized a few configuration details.
After setting allow_all: true I was able to log in, but the pod is still running as the default jovyan user.

hub:
  config:
    # JupyterHub:
    #   authenticator_class: ldapauthenticator.LDAPAuthenticator
    Authenticator:
      enable_auth_state: true
      allow_all: true
    LDAPAuthenticator:
      # See https://github.com/rroemhild/docker-test-openldap#ldap-structure
      # for users
      server_address: ldap-test-openldap
      lookup_dn: True
      bind_dn_template: "cn={username},ou=people,dc=planetexpress,dc=com"
      user_search_base: "ou=people,dc=planetexpress,dc=com"
      user_attribute: uid
      lookup_dn_user_dn_attribute: cn
      escape_userdn: True
      auth_state_attributes: [uidNumber, gidNumber, uid]
      use_lookup_dn_username: False

I have set cmd: to empty and uid: 0, but I see the following in the single-user pod logs:

WARNING: container must be started as root to change the desired user's name with NB_USER=""!
WARNING: container must be started as root to change the desired user's id with NB_UID=""!
WARNING: container must be started as root to change the desired user's group id with NB_GID="0"!
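The user switch in the docker-stacks start script only happens when the container starts as root and the NB_* variables carry non-empty values, and the empty NB_USER/NB_UID values in the warnings above suggest the environment variables from auth_state were never set. A sketch of the relevant keys, with placeholder values that would normally be filled per user from auth_state:

singleuser:
  uid: 0        # start the container as root so start.sh may switch users
  cmd: null     # keep the image's own start.sh entrypoint
  extraEnv:
    NB_USER: "aduser"   # placeholders; the aim here is to set these per user
    NB_UID: "10001"     # from auth_state in a pre_spawn_start override
    NB_GID: "10001"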

Can you share your complete Z2JH configuration, and show us the full singleuser pod logs?

hub:
  revisionHistoryLimit:
  config:
    JupyterHub:
#      admin_access: true
      authenticator_class: ldapauthenticator.LDAPAuthenticator
    debug:
      enabled: true
    Authenticator:
      allow_all: true
      enable_auth_state: true
      admin_users:
         - username
    LDAPAuthenticator:
      escape_userdn: false
      lookup_dn: true
      tls_strategy: on_connect
      lookup_dn_search_user: <Ads service account>
      lookup_dn_search_filter: ({login_attr}={login})
      lookup_dn_search_password: <password>
      lookup_dn_user_dn_attribute: cn
      server_address: adsserver.com
      server_port: 636
      user_attribute: sAMAccountName
      user_search_base: 'DC=ads,DC=abccompany,DC=com'
      group_attributes: memberOf
      auth_state_attributes: [uidNumber,gidNumber,uid]
      tls_kwargs: {
      "ca_certs_file": /srv/jupyterhub/adsldap-combined.pem,
      }
  extraConfig:
    configClass: |
      c.JupyterHub.authenticator_class = LDAPAuthenticatorExtend
      c.JupyterHub.spawner_class = KubeSpawner
    extendedLDAP: |
      from tornado import gen
      from ldapauthenticator import LDAPAuthenticator
      class LDAPAuthenticatorExtend(LDAPAuthenticator):
        allow_all: True
        @gen.coroutine
        def kubespawner_pre_spawn_start(self, user, spawner):
          self.log.debug('running preSpawn hook')
          auth_state = yield spawner.user.get_auth_state()
          self.log.debug('pre_spawn_start auth_state:%s' % auth_state)
          spawner.environment["NB_UID"] = str(auth_state["uidNumber"][0])
          spawner.environment["NB_GID"] = str(auth_state["gidNumber"][0])
          spawner.environment["NB_USER"] = str(auth_state["uid"][0])
      c.KubeSpawner.uid = str(auth_state["uidNumber"][0])
  service:
    type: ClusterIP
    annotations: {}
    ports:
      nodePort:
      appProtocol:
    extraPorts: []
    loadBalancerIP:
  baseUrl: /
  cookieSecret:
  initContainers: []
  tolerations: []
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  activeServerLimit:
  deploymentStrategy:
    ## type: Recreate
    ## - sqlite-pvc backed hubs require the Recreate deployment strategy as a
    ##   typical PVC storage can only be bound to one pod at a time.
    ## - JupyterHub isn't designed to support being run in parallel. More work
    ##   needs to be done in JupyterHub itself before a fully highly available
    ##   (HA) deployment of JupyterHub on k8s is possible.
    type: Recreate
  db:
    type: sqlite-pvc
    upgrade:
    pvc:
      annotations: {}
      selector: {}
      accessModes:
        - ReadWriteMany
      storage: 1Gi
      subPath:
      storageClassName:
    url:
    password:
  labels: {}
  annotations: {}
  command: []
  args: []
  extraConfig: {}
  extraFiles: {}
  extraEnv: {}
  extraContainers: []
  extraVolumes: []
  extraVolumeMounts: []
  image:
#    name: quay.io/jupyterhub/k8s-hub
#    tag: "4.1.0"
    name: <private registry>/k8s-hub-test
    tag: "v1"
    pullPolicy:
    pullSecrets: []
  resources: {}
  podSecurityContext:
    runAsNonRoot: true
    fsGroup: 1000
    seccompProfile:
      type: "RuntimeDefault"
  containerSecurityContext:
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
  lifecycle: {}
  loadRoles: {}
  services: {}
  pdb:
    enabled: false
    maxUnavailable:
    minAvailable: 1
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: true
      dnsPortsCloudMetadataServer: true
      dnsPortsKubeSystemNamespace: true
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: true
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  allowNamedServers: false
  namedServerLimitPerUser:
  authenticatePrometheus:
  redirectToServer:
  shutdownOnLogout:
  templatePaths: []
  templateVars: {}
  livenessProbe:
    # The livenessProbe's aim to give JupyterHub sufficient time to startup but
    # be able to restart if it becomes unresponsive for ~5 min.
    enabled: true
    initialDelaySeconds: 300
    periodSeconds: 10
    failureThreshold: 30
    timeoutSeconds: 3
  readinessProbe:
    # The readinessProbe's aim is to provide a successful startup indication,
    # but following that never become unready before its livenessProbe fail and
    # restarts it if needed. To become unready following startup serves no
    # purpose as there are no other pod to fallback to in our non-HA deployment.
    enabled: true
    initialDelaySeconds: 0
    periodSeconds: 2
    failureThreshold: 1000
    timeoutSeconds: 1
  existingSecret:
  serviceAccount:
    create: true
    name:
    annotations: {}
  extraPodSpec: {}

rbac:
  create: true

# proxy relates to the proxy pod, the proxy-public service, and the autohttps
# pod and proxy-http service.
proxy:
  secretToken:
  annotations: {}
  deploymentStrategy:
    ## type: Recreate
    ## - JupyterHub's interaction with the CHP proxy becomes a lot more robust
    ##   with this configuration. To understand this, consider that JupyterHub
    ##   during startup will interact a lot with the k8s service to reach a
    ##   ready proxy pod. If the hub pod during a helm upgrade is restarting
    ##   directly while the proxy pod is making a rolling upgrade, the hub pod
    ##   could end up running a sequence of interactions with the old proxy pod
    ##   and finishing up the sequence of interactions with the new proxy pod.
    ##   As CHP proxy pods carry individual state this is very error prone. One
    ##   outcome when not using Recreate as a strategy has been that user pods
    ##   have been deleted by the hub pod because it considered them unreachable
    ##   as it only configured the old proxy pod but not the new before trying
    ##   to reach them.
    type: Recreate
    ## rollingUpdate:
    ## - WARNING:
    ##   This is required to be set explicitly blank! Without it being
    ##   explicitly blank, k8s will let eventual old values under rollingUpdate
    ##   remain and then the Deployment becomes invalid and a helm upgrade would
    ##   fail with an error like this:
    ##
    ##     UPGRADE FAILED
    ##     Error: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    ##     Error: UPGRADE FAILED: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    rollingUpdate:
  # service relates to the proxy-public service
  service:
    type: LoadBalancer
    labels: {}
    annotations: {}
    nodePorts:
      http:
      https:
    disableHttpPort: false
    extraPorts: []
    loadBalancerIP:
    loadBalancerSourceRanges: []
  # chp relates to the proxy pod, which is responsible for routing traffic based
  # on dynamic configuration sent from JupyterHub to CHP's REST API.
  chp:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: quay.io/jupyterhub/configurable-http-proxy
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      tag: "4.6.3" # https://github.com/jupyterhub/configurable-http-proxy/tags
      pullPolicy:
      pullSecrets: []
    extraCommandLineFlags: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 0
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    resources: {}
    defaultTarget:
    errorTarget:
    extraEnv: {}
    nodeSelector: {}
    tolerations: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: true
        dnsPortsCloudMetadataServer: true
        dnsPortsKubeSystemNamespace: true
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    extraPodSpec: {}
  # traefik relates to the autohttps pod, which is responsible for TLS
  # termination when proxy.https.type=letsencrypt.
  traefik:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: traefik
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      tag: "v3.3.5" # ref: https://hub.docker.com/_/traefik?tab=tags
      pullPolicy:
      pullSecrets: []
    hsts:
      includeSubdomains: false
      preload: false
      maxAge: 15724800 # About 6 months
    resources: {}
    labels: {}
    extraInitContainers: []
    extraEnv: {}
    extraVolumes: []
    extraVolumeMounts: []
    extraStaticConfig: {}
    extraDynamicConfig: {}
    nodeSelector: {}
    tolerations: []
    extraPorts: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: true
        dnsPortsCloudMetadataServer: true
        dnsPortsKubeSystemNamespace: true
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  secretSync:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: quay.io/jupyterhub/k8s-secret-sync
      tag: "4.2.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  labels: {}
  https:
    enabled: false
    type: letsencrypt
    #type: letsencrypt, manual, offload, secret
    letsencrypt:
      contactEmail:
      # Specify custom server here (https://acme-staging-v02.api.letsencrypt.org/directory) to hit staging LE
      acmeServer: https://acme-v02.api.letsencrypt.org/directory
    manual:
      key:
      cert:
    secret:
      name:
      key: tls.key
      crt: tls.crt
    hosts: []

# singleuser relates to the configuration of KubeSpawner which runs in the hub
# pod, and its spawning of user pods such as jupyter-myusername.
singleuser:
  cmd:
  uid: 0
  fsGid: 0
  extraEnv:
    GRANT_SUDO: "yes"
    NOTEBOOK_ARGS: "--allow-root"
  nodeSelector:
    dedicated: jupyterhub
  podNameTemplate:
  extraTolerations: []
  extraNodeAffinity:
    required: []
    preferred: []
  extraPodAffinity:
    required: []
    preferred: []
  extraPodAntiAffinity:
    required: []
    preferred: []
  networkTools:
    image:
      name: quay.io/jupyterhub/k8s-network-tools
      tag: "4.2.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  cloudMetadata:
    # block set to true will append a privileged initContainer using the
    # iptables to block the sensitive metadata server at the provided ip.
    blockWithIptables: true
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: false
      dnsPortsCloudMetadataServer: true
      dnsPortsKubeSystemNamespace: true
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: false
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  events: true
  extraAnnotations: {}
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
  extraFiles: {}
  extraEnv: {}
  lifecycleHooks: {}
  initContainers: []
  extraContainers: []
  allowPrivilegeEscalation: false
  serviceAccountName:
  storage:
    type: dynamic
    extraLabels: {}
    static:
      pvcName:
      subPath: "{username}"
    capacity: 10Gi
    homeMountPath: /home/{username}
    dynamic:
      storageClass:
      pvcNameTemplate:
      volumeNameTemplate: volume-{user_server}
      storageAccessModes: [ReadWriteMany]
      subPath:
    extraVolumes:
      - name: mapr-client-pvc
        persistentVolumeClaim:
          claimName: mapr-client-pvc
    extraVolumeMounts:
      - name: mapr-client-pvc
        mountPath: /opt/mapr
        readOnly: true
  image:
    name: jupyter/base-notebook
    tag: "python-3.10"
    pullPolicy:
    pullSecrets: []
  startTimeout: 300
  cpu:
    limit:
    guarantee:
  memory:
    limit:
    guarantee: 1G
  extraResource:
    limits: {}
    guarantees: {}
#  cmd: jupyterhub-singleuser
  defaultUrl:
  extraPodConfig: {}
  profileList: []

# scheduling relates to the user-scheduler pods and user-placeholder pods.
scheduling:
  userScheduler:
    enabled: true
    revisionHistoryLimit:
    replicas: 2
    logLevel: 4
    # plugins are configured on the user-scheduler to make us score how we
    # schedule user pods in a way to help us schedule on the most busy node. By
    # doing this, we help scale down more effectively. It isn't obvious how to
    # enable/disable scoring plugins, and configure them, to accomplish this.
    #
    # plugins ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins-1
    # migration ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduler-configuration-migrations
    #
    plugins:
      score:
        # We make use of the default scoring plugins, but we re-enable some with
        # a new priority, leave some enabled with their lower default priority,
        # and disable some.
        #
        # Below are the default scoring plugins as of 2024-09-23 according to
        # https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins.
        #
        # Re-enabled with high priority:
        # - NodeAffinity
        # - InterPodAffinity
        # - NodeResourcesFit
        # - ImageLocality
        #
        # Remains enabled with low default priority:
        # - TaintToleration
        # - PodTopologySpread
        # - VolumeBinding
        #
        # Disabled for scoring:
        # - NodeResourcesBalancedAllocation
        #
        disabled:
          # We disable these plugins (with regards to scoring) to not interfere
          # or complicate our use of NodeResourcesFit.
          - name: NodeResourcesBalancedAllocation
          # Disable plugins to be allowed to enable them again with a different
          # weight and avoid an error.
          - name: NodeAffinity
          - name: InterPodAffinity
          - name: NodeResourcesFit
          - name: ImageLocality
        enabled:
          - name: NodeAffinity
            weight: 14631
          - name: InterPodAffinity
            weight: 1331
          - name: NodeResourcesFit
            weight: 121
          - name: ImageLocality
            weight: 11
    pluginConfig:
      # Here we declare that we should optimize pods to fit based on a
      # MostAllocated strategy instead of the default LeastAllocated.
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      # IMPORTANT: Bumping the minor version of this binary should go hand in
      #            hand with an inspection of the user-scheduler's RBAC
      #            resources that we have forked in
      #            templates/scheduling/user-scheduler/rbac.yaml.
      #
      #            Debugging advice:
      #
      #            - Is configuration of kube-scheduler broken in
      #              templates/scheduling/user-scheduler/configmap.yaml?
      #
      #            - Is the kube-scheduler binary's compatibility to work
      #              against a k8s api-server that is too new or too old?
      #
      #            - You can update the GitHub workflow that runs tests to
      #              include "deploy/user-scheduler" in the k8s namespace report
      #              and reduce the user-scheduler deployments replicas to 1 in
      #              dev-config.yaml to get relevant logs from the user-scheduler
      #              pods. Inspect the "Kubernetes namespace report" action!
      #
      #            - Typical failures are that kube-scheduler fails to search for
      #              resources via its "informers", and won't start trying to
      #              schedule pods before they succeed which may require
      #              additional RBAC permissions or that the k8s api-server is
      #              aware of the resources.
      #
      #            - If "successfully acquired lease" can be seen in the logs, it
      #              is a good sign kube-scheduler is ready to schedule pods.
      #
      name: registry.k8s.io/kube-scheduler
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow. The minor version is pinned in the
      # workflow, and should be updated there if a minor version bump is done
      # here. We aim to stay around 1 minor version behind the latest k8s
      # version.
      #
      tag: "v1.30.11" # ref: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
      pullPolicy:
      pullSecrets: []
    nodeSelector: {}
    tolerations: []
    labels: {}
    annotations: {}
    pdb:
      enabled: true
      maxUnavailable: 1
      minAvailable:
    resources: {}
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  podPriority:
    enabled: false
    globalDefault: false
    defaultPriority: 0
    imagePullerPriority: -5
    userPlaceholderPriority: -10
  userPlaceholder:
    enabled: true
    image:
      name: registry.k8s.io/pause
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      # If you update this, also update prePuller.pause.image.tag
      #
      tag: "3.10"
      pullPolicy:
      pullSecrets: []
    revisionHistoryLimit:
    replicas: 0
    labels: {}
    annotations: {}
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    resources: {}
    extraPodSpec: {}
  corePods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: core
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: core
        effect: NoSchedule
      - key: dedicated
        operator: Equal
        value: jupyterhub
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer
  userPods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: user
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: user
        effect: NoSchedule
      - key: dedicated
        operator: Equal
        value: jupyterhub
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer

# prePuller relates to the hook|continuous-image-puller DaemonSets
prePuller:
  revisionHistoryLimit:
  labels: {}
  annotations: {}
  resources: {}
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 65534 # nobody user
    runAsGroup: 65534 # nobody group
    allowPrivilegeEscalation: false
    capabilities:
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
  extraTolerations: []
  # hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
  hook:
    enabled: true
    pullOnlyOnChanges: true
    # image and the configuration below relates to the hook-image-awaiter Job
    image:
      name: quay.io/jupyterhub/k8s-image-awaiter
      tag: "4.2.0"
      pullPolicy:
      pullSecrets: []
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    podSchedulingWaitDuration: 10
    nodeSelector: {}
    tolerations: []
    resources: {}
    # Service Account for the hook-image-awaiter Job
    serviceAccount:
      create: true
      name:
      annotations: {}
    # Service Account for the hook-image-puller DaemonSet
    serviceAccountImagePuller:
      create: true
      name:
      annotations: {}
  continuous:
    enabled: true
    serviceAccount:
      create: true
      name:
      annotations: {}
  pullProfileListImages: true
  extraImages: {}
  pause:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: registry.k8s.io/pause
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      # If you update this, also update scheduling.userPlaceholder.image.tag
      #
      tag: "3.10"
      pullPolicy:
      pullSecrets: []

ingress:
  enabled: false
  annotations: {}
  ingressClassName:
  hosts: []
  pathSuffix:
  pathType: Prefix
  tls: []
  extraPaths: []

# cull relates to the jupyterhub-idle-culler service, responsible for evicting
# inactive singleuser pods.
#
# The configuration below, except for enabled, corresponds to command-line flags
# for jupyterhub-idle-culler as documented here:
# https://github.com/jupyterhub/jupyterhub-idle-culler#as-a-standalone-script
#
cull:
  enabled: true
  users: false # --cull-users
  adminUsers: true # --cull-admin-users
  removeNamedServers: false # --remove-named-servers
  timeout: 3600 # --timeout
  every: 600 # --cull-every
  concurrency: 10 # --concurrency
  maxAge: 0 # --max-age

debug:
  enabled: True

global:
  safeToShowValues: false

@manics Here is my full Z2JH config. I still see the single-user pod starting as the jovyan user.

It looks like your pre_spawn_start override is never being called. If you want to override the hook, you must use the same name, pre_spawn_start, not kubespawner_pre_spawn_start. The tornado gen.coroutine decorator is also unnecessary on current JupyterHub: you can write it as an async method and await the auth state.

I made the change, but it looks like pre_spawn_start itself is not getting triggered.

That means your extraConfig is not properly set. Can you try the following?

extraConfig:
    config.py: |
      from ldapauthenticator import LDAPAuthenticator
      from kubespawner import KubeSpawner

      class LDAPAuthenticatorExtend(LDAPAuthenticator):
        allow_all = True

        async def pre_spawn_start(self, user, spawner):
          self.log.debug('running preSpawn hook')
          # get_auth_state() is a coroutine, so it must be awaited
          auth_state = await spawner.user.get_auth_state()
          self.log.debug('pre_spawn_start auth_state:%s' % auth_state)
          spawner.environment["NB_UID"] = str(auth_state["uidNumber"][0])
          spawner.environment["NB_GID"] = str(auth_state["gidNumber"][0])
          spawner.environment["NB_USER"] = str(auth_state["uid"][0])

      c.JupyterHub.authenticator_class = LDAPAuthenticatorExtend
      c.JupyterHub.spawner_class = KubeSpawner

@mahendrapaipuri I tried the config you provided, but I still don't see it getting called.
Logs from the single-user pod:

Defaulted container "notebook" out of: notebook, block-cloud-metadata (init)
Entered start.sh with args: jupyterhub-singleuser --ip=0.0.0.0
Running hooks in: /usr/local/bin/start-notebook.d as uid: 0 gid: 0
Done running hooks in: /usr/local/bin/start-notebook.d
Running hooks in: /usr/local/bin/before-notebook.d as uid: 0 gid: 0
Done running hooks in: /usr/local/bin/before-notebook.d
Running as jovyan: jupyterhub-singleuser --ip=0.0.0.0

The hook is run before the server pod is created. Can you share your hub logs with debug enabled?
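A quick way to confirm the extraConfig snippet is loaded at all is to log something at module level (a debugging sketch; the key name is arbitrary) and then check the hub startup logs for that line and for the class reported on the "Using Authenticator:" line:

hub:
  extraConfig:
    00-debug-marker: |
      # runs when the hub loads its configuration; if this never shows up in the
      # hub pod logs, the extraConfig snippet is not being applied at all
      print("custom extraConfig loaded")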


@manics Please find the hub pod logs below.

[D 2025-07-08 14:50:30.577 JupyterHub application:908] Looking for /usr/local/etc/jupyterhub/jupyterhub_config in /srv/jupyterhub
Loading /usr/local/etc/jupyterhub/secret/values.yaml
No config at /usr/local/etc/jupyterhub/existing-secret/values.yaml
[D 2025-07-08 14:50:30.850 JupyterHub application:929] Loaded config file: /usr/local/etc/jupyterhub/jupyterhub_config.py
[I 2025-07-08 14:50:30.861 JupyterHub app:3346] Running JupyterHub version 5.2.1
[I 2025-07-08 14:50:30.861 JupyterHub app:3376] Using Authenticator: ldapauthenticator.ldapauthenticator.LDAPAuthenticator-2.0.2
[I 2025-07-08 14:50:30.861 JupyterHub app:3376] Using Spawner: kubespawner.spawner.KubeSpawner-7.0.0
[I 2025-07-08 14:50:30.861 JupyterHub app:3376] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-5.2.1
/usr/local/lib/python3.12/site-packages/jupyter_events/schema.py:68: JupyterEventsVersionWarning: The `version` property of an event schema must be a string. It has been type coerced, but in a future version of this library, it will fail to validate. Please update schema: https://schema.jupyter.org/jupyterhub/events/server-action
  validate_schema(_schema)
[D 2025-07-08 14:50:30.863 JupyterHub app:1998] Connecting to db: sqlite:///jupyterhub.sqlite
[D 2025-07-08 14:50:30.894 JupyterHub orm:1509] database schema version found: 4621fec11365
[D 2025-07-08 14:50:30.902 JupyterHub orm:1509] database schema version found: 4621fec11365
[D 2025-07-08 14:50:30.949 JupyterHub app:2338] Loading roles into database
[D 2025-07-08 14:50:30.950 JupyterHub app:2347] Loading role jupyterhub-idle-culler
[W 2025-07-08 14:50:30.955 JupyterHub app:2120]
    JupyterHub.admin_users is deprecated since version 0.7.2.
    Use Authenticator.admin_users instead.
[I 2025-07-08 14:50:31.061 JupyterHub app:2919] Creating service jupyterhub-idle-culler without oauth.
[D 2025-07-08 14:50:31.066 JupyterHub app:2685] Purging expired APITokens
[D 2025-07-08 14:50:31.085 JupyterHub app:2685] Purging expired OAuthCodes
[D 2025-07-08 14:50:31.088 JupyterHub app:2685] Purging expired Shares
[D 2025-07-08 14:50:31.090 JupyterHub app:2685] Purging expired ShareCodes
[D 2025-07-08 14:50:31.093 JupyterHub app:2459] Loading role assignments from config
[D 2025-07-08 14:50:31.112 JupyterHub app:2970] Initializing spawners
[D 2025-07-08 14:50:31.126 JupyterHub user:496] Creating <class 'kubespawner.spawner.KubeSpawner'> for userid:
[D 2025-07-08 14:50:31.128 JupyterHub app:3100] Loading state for userid from db
[D 2025-07-08 14:50:31.128 JupyterHub app:3111] Awaiting checks for 1 possibly-running spawners
[D 2025-07-08 14:50:31.135 JupyterHub app:3039] Verifying that userid is running at http://localhost:8888/user/userid/
[D 2025-07-08 14:50:31.135 JupyterHub utils:292] Waiting 30s for server at http://localhost:8888/user/userid/api
[I 2025-07-08 14:50:31.136 JupyterHub reflector:297] watching for pods with label selector='component=singleuser-server' in namespace k8s-jupyter
[D 2025-07-08 14:50:31.136 JupyterHub reflector:304] Connecting pods watcher
[D 2025-07-08 14:50:31.139 JupyterHub utils:328] Server at http://localhost:8888/user/userid/api responded in 0.00s
[W 2025-07-08 14:50:31.139 JupyterHub _version:67] jupyterhub version 5.2.1 != jupyterhub-singleuser version 4.0.2. This could cause failure to authenticate and result in redirect loops!
[I 2025-07-08 14:50:31.139 JupyterHub app:3053] userid still running
[D 2025-07-08 14:50:31.140 JupyterHub spawner:1475] Polling subprocess every 30s
[D 2025-07-08 14:50:31.140 JupyterHub app:3120] Loaded users:
      userid admin userid: running at <Server(localhost:8888)>
[I 2025-07-08 14:50:31.140 JupyterHub app:3416] Initialized 1 spawners in 0.028 seconds
[I 2025-07-08 14:50:31.144 JupyterHub metrics:373] Found 1 active users in the last ActiveUserPeriods.twenty_four_hours
[I 2025-07-08 14:50:31.144 JupyterHub metrics:373] Found 1 active users in the last ActiveUserPeriods.seven_days
[I 2025-07-08 14:50:31.144 JupyterHub metrics:373] Found 2 active users in the last ActiveUserPeriods.thirty_days
[I 2025-07-08 14:50:31.145 JupyterHub app:3703] Not starting proxy
[D 2025-07-08 14:50:31.145 JupyterHub proxy:925] Proxy: Fetching GET http://proxy-api:8001/api/routes
[I 2025-07-08 14:50:31.146 JupyterHub app:3739] Hub API listening on http://:8081/hub/
[I 2025-07-08 14:50:31.146 JupyterHub app:3741] Private Hub API connect url http://hub:8081/hub/
[I 2025-07-08 14:50:31.146 JupyterHub app:3615] Starting managed service jupyterhub-idle-culler
[I 2025-07-08 14:50:31.146 JupyterHub service:423] Starting service 'jupyterhub-idle-culler': ['python3', '-m', 'jupyterhub_idle_culler', '--url=http://localhost:8081/hub/api', '--timeout=3600', '--cull-every=600', '--concurrency=10']
[I 2025-07-08 14:50:31.147 JupyterHub service:136] Spawning python3 -m jupyterhub_idle_culler --url=http://localhost:8081/hub/api --timeout=3600 --cull-every=600 --concurrency=10
[D 2025-07-08 14:50:31.148 JupyterHub spawner:1475] Polling subprocess every 30s
[D 2025-07-08 14:50:31.149 JupyterHub proxy:389] Fetching routes to check
[D 2025-07-08 14:50:31.149 JupyterHub proxy:925] Proxy: Fetching GET http://proxy-api:8001/api/routes
[D 2025-07-08 14:50:31.150 JupyterHub proxy:392] Checking routes
[I 2025-07-08 14:50:31.150 JupyterHub app:3772] JupyterHub is now running, internal Hub API at http://hub:8081/hub/
[D 2025-07-08 14:50:31.150 JupyterHub app:3339] It took 0.580 seconds for the Hub to start
[D 2025-07-08 14:50:31.306 JupyterHub log:192] 200 GET /hub/health (@) 0.58ms
[D 2025-07-08 14:50:31.328 JupyterHub base:366] Recording first activity for <APIToken('6f82...', service='jupyterhub-idle-culler', client_id='jupyterhub')>
[I 2025-07-08 14:50:31.340 JupyterHub log:192] 200 GET /hub/api/ (jupyterhub-idle-culler@::1) 14.82ms
[D 2025-07-08 14:50:31.344 JupyterHub scopes:1010] Checking access to /hub/api/users via scope list:users
[I 2025-07-08 14:50:31.362 JupyterHub log:192] 200 GET /hub/api/users?state=[secret] (jupyterhub-idle-culler@::1) 20.40ms
[D 2025-07-08 14:50:33.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.35ms
[D 2025-07-08 14:50:35.307 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.45ms
[D 2025-07-08 14:50:37.307 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.58ms
[D 2025-07-08 14:50:39.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.44ms
[D 2025-07-08 14:50:41.140 JupyterHub reflector:390] pods watcher timeout
[D 2025-07-08 14:50:41.140 JupyterHub reflector:304] Connecting pods watcher
[D 2025-07-08 14:50:41.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.40ms
[D 2025-07-08 14:50:43.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.41ms
[D 2025-07-08 14:50:45.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.43ms
[D 2025-07-08 14:50:47.307 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.43ms
[D 2025-07-08 14:50:49.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.38ms
[D 2025-07-08 14:50:51.146 JupyterHub reflector:390] pods watcher timeout
[D 2025-07-08 14:50:51.147 JupyterHub reflector:304] Connecting pods watcher
[D 2025-07-08 14:50:51.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.36ms
[D 2025-07-08 14:50:53.307 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.38ms
[D 2025-07-08 14:50:55.307 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.44ms
[D 2025-07-08 14:50:57.307 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.40ms
[D 2025-07-08 14:50:58.024 JupyterHub base:411] Refreshing auth for userid
[I 2025-07-08 14:50:58.052 JupyterHub log:192] 200 GET /hub/home (userid@::ffff:) 51.29ms
[D 2025-07-08 14:50:58.101 JupyterHub log:192] 304 GET /hub/static/components/@fortawesome/fontawesome-free/webfonts/fa-solid-900.woff2 (@::ffff:) 0.97ms
[D 2025-07-08 14:50:58.102 JupyterHub log:192] 200 GET /hub/static/js/home.js?v=20250708145031 (@::ffff:) 0.43ms
[D 2025-07-08 14:50:58.116 JupyterHub log:192] 200 GET /hub/static/components/moment/moment.js?v=20250708145031 (@::ffff:) 0.50ms
[D 2025-07-08 14:50:58.116 JupyterHub log:192] 200 GET /hub/static/js/jhapi.js?v=20250708145031 (@::ffff:) 0.34ms
[D 2025-07-08 14:50:58.123 JupyterHub log:192] 200 GET /hub/static/js/utils.js?v=20250708145031 (@::ffff:) 0.79ms
[D 2025-07-08 14:50:59.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.63ms
[D 2025-07-08 14:50:59.938 JupyterHub scopes:1010] Checking access to /hub/admin via scope admin-ui
[I 2025-07-08 14:50:59.941 JupyterHub log:192] 200 GET /hub/admin (userid@::ffff:) 10.72ms
[D 2025-07-08 14:50:59.991 JupyterHub log:192] 304 GET /hub/static/components/@fortawesome/fontawesome-free/webfonts/fa-solid-900.woff2 (@::ffff:) 0.40ms
[W 2025-07-08 14:51:00.008 JupyterHub _xsrf_utils:195] Skipping XSRF check for insecure request GET /hub/api/users
[D 2025-07-08 14:51:00.008 JupyterHub scopes:1010] Checking access to /hub/api/users via scope list:users
[I 2025-07-08 14:51:00.020 JupyterHub log:192] 200 GET /hub/api/users?include_stopped_servers=1&offset=0&limit=50&name_filter=&sort=id&state=[secret]&_xsrf=[secret] (userid@::ffff:) 14.08ms
[D 2025-07-08 14:51:01.151 JupyterHub reflector:390] pods watcher timeout
[D 2025-07-08 14:51:01.152 JupyterHub reflector:304] Connecting pods watcher
[D 2025-07-08 14:51:01.306 JupyterHub log:192] 200 GET /hub/health (@0.0.0.0) 0.33ms
[D 2025-07-08 14:51:01.665 JupyterHub scopes:1010] Checking access to /hub/api/users/admin/server via scope delete:servers!server=admin/
[D 2025-07-08 14:51:01.668 JupyterHub user:496] Creating <class 'kubespawner.spawner.KubeSpawner'> for admin:
[I 2025-07-08 14:51:01.670 JupyterHub log:192] 204 DELETE /hub/api/users/admin/server?_xsrf=[secret] (userid@::ffff:) 8.18ms
[D 2025-07-08 14:51:01.674 JupyterHub scopes:1010] Checking access to /hub/api/users/user/server via scope delete:servers!server=user/
[D 2025-07-08 14:51:01.676 JupyterHub user:496] Creating <class 'kubespawner.spawner.KubeSpawner'> for user:
[I 2025-07-08 14:51:01.678 JupyterHub log:192] 204 DELETE /hub/api/users/user/server?_xsrf=[secret] (userid@::ffff:) 7.38ms
Creating <class 'kubespawner.spawner.KubeSpawner'> for userid