The base image for zero-to-jupyterhub-k8s

According to the documentation, a customized image should be based on jupyter/base-notebook. But I checked zero-to-jupyterhub-k8s’s default image, jupyterhub/k8s-singleuser-sample, and found its Dockerfile on GitHub; it differs greatly from base-notebook, even though its README says custom images should be based on jupyter/base-notebook. This confused me.
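For context, the kind of custom image the documentation describes is roughly a Dockerfile that starts from an official docker-stacks image and layers packages on top. A minimal hypothetical sketch (the tag and package names here are illustrative, not taken from this thread):

```dockerfile
# Hypothetical custom singleuser image built on an official docker-stacks
# image, rather than on the minimal k8s-singleuser-sample test image.
FROM quay.io/jupyter/base-notebook:latest

# Install extra packages as the notebook user; NB_UID is provided by
# the docker-stacks base images.
USER ${NB_UID}
RUN pip install --no-cache-dir \
    pandas \
    matplotlib
```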

Dear @vipcxj,

Thanks for the question. Could you open an issue on GitHub, as we should fix the information in the source code repository? Thanks!

jupyterhub/k8s-singleuser-sample is a minimal image designed for testing Z2JH, so that we don’t need to download a multi-GB image for testing. If you’re very familiar with Jupyter Server/Lab/Notebook you can build on top of it, but for most people the official Jupyter docker-stacks images include a lot of default packages and extra functionality that is generally useful.

I have already tried the official Jupyter docker-stacks images, but the JupyterHub that used to work became unusable, and I followed the official documentation exactly. I opened an issue, but you labeled it as support, so since this isn’t a bug, could you help me solve it?

Can you please share more details regarding your Kubernetes setup, including the logs and pod description of your spawned singleuser server? I couldn’t reproduce the described error (on OpenShift).
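For anyone collecting that information, the usual commands look something like this (the namespace and pod names are placeholders, not values from this thread):

```shell
# Replace <namespace> with the namespace JupyterHub is deployed in,
# and <pod> with the spawned singleuser pod, e.g. jupyter-<username>.
kubectl get pods -n <namespace>              # find the singleuser pod
kubectl describe pod <pod> -n <namespace>    # container state and events
kubectl logs <pod> -n <namespace>            # singleuser server logs
kubectl get events -n <namespace> --sort-by=.lastTimestamp
```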

I deployed JupyterHub on a k3s cluster created with k3d. Everything worked fine before using a custom image: the terminal accepted input, I updated pip in the terminal, and configured pip mirrors. Then I switched to the custom image according to the documentation; the deployment showed no errors and everything still seemed fine until I logged into Jupyter. At first I ran code in a notebook and noticed that any code execution would hang. Then I opened Jupyter’s terminal and found I couldn’t type anything.

Here is my Helm values.yaml:

hub:
  config:
    Authenticator:
      manage_groups: true
      auth_state_groups_key: oauth_user.groups
    GenericOAuthenticator:
      login_service: keycloak
      username_claim: preferred_username
      scope:
        - openid
      userdata_params:
        state: state
      admin_groups:
        - /jupyter/admin
      allowed_groups:
        - /jupyter
    JupyterHub:
      admin_access: true
      authenticator_class: generic-oauth
singleuser:
  image:
    name: quay.io/jupyter/all-spark-notebook
    tag: 1757bd975be1 # 2025-12-22 python-3.13.11 conda-25.11.1 spark-4.1.0 notebook-7.5.1 python-3.13 java-17.0.17 r-4.5.2 mamba-2.4.0 ubuntu-24.04 hub-5.4.3 lab-4.5.1
  cmd: null
  networkPolicy:
    egressAllowRules:
      privateIPs: true
proxy:
  service:
    type: ClusterIP
ingress:
  enabled: true

Here is the command I use to upgrade the Helm release:

helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update jupyterhub
helm upgrade --install jupyterhub jupyterhub/jupyterhub \
    --namespace "${NAMESPACE}" --create-namespace \
    --version=4.3.1 \
    -f $CUR_DIR/values.yaml \
    --set-string "ingress.hosts[0]=${HOSTNAME}" \
    --set-string "hub.config.GenericOAuthenticator.client_id=${OIDC_CLIENT_ID}" \
    --set-string "hub.config.GenericOAuthenticator.client_secret=${OIDC_CLIENT_SECRET}" \
    --set-string "hub.config.GenericOAuthenticator.oauth_callback_url=${SCHEMA}://${HOSTNAME}:${PORT}/hub/oauth_callback" \
    --set-string "hub.config.GenericOAuthenticator.authorize_url=${OIDC_URI}/realms/big-data/protocol/openid-connect/auth" \
    --set-string "hub.config.GenericOAuthenticator.token_url=http://${CLUSTER_NAME}-keycloak-service:8180/realms/big-data/protocol/openid-connect/token" \
    --set-string "hub.config.GenericOAuthenticator.userdata_url=http://${CLUSTER_NAME}-keycloak-service:8180/realms/big-data/protocol/openid-connect/userinfo" \
    --set-string "hub.config.GenericOAuthenticator.logout_redirect_url=${OIDC_URI}/realms/big-data/protocol/openid-connect/logout?client_id=jupyter&post_logout_redirect_uri=${SCHEMA}%3A%2F%2F${HOSTNAME}%3A${PORT}%2Fhub" \
    --atomic --wait --timeout 15m
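To confirm what the chart actually received after an upgrade like this, the user-supplied values can be inspected and the rollout watched (release name and namespace as used above):

```shell
# Show the values the release was installed with (user-supplied only).
helm get values jupyterhub --namespace "${NAMESPACE}"

# Watch the hub and proxy pods roll out after the upgrade.
kubectl get pods --namespace "${NAMESPACE}" -w
```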

This is the YAML of the hub pod:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config-map: 5480926184540f3990673e2365a48b240be64e036b2bd5303bc087c715533436
    checksum/secret: dcd93b9c828ee87cf113627001f7ce372b723c9532a003b91a189b6c617df6b6
  creationTimestamp: "2025-12-23T10:23:29Z"
  generateName: hub-54b8bf86cf-
  generation: 1
  labels:
    app: jupyterhub
    app.kubernetes.io/component: hub
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/name: jupyterhub
    component: hub
    hub.jupyter.org/network-access-proxy-api: "true"
    hub.jupyter.org/network-access-proxy-http: "true"
    hub.jupyter.org/network-access-singleuser: "true"
    pod-template-hash: 54b8bf86cf
    release: jupyterhub
  name: hub-54b8bf86cf-9rdfc
  namespace: big-data
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: hub-54b8bf86cf
    uid: 5a119c6d-ea01-4a50-ad8d-851ed56150a2
  resourceVersion: "129516"
  uid: ff974e4b-9b6f-4844-9fdc-402fac920716
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - preference:
          matchExpressions:
          - key: hub.jupyter.org/node-purpose
            operator: In
            values:
            - core
        weight: 100
  containers:
  - args:
    - jupyterhub
    - --config
    - /usr/local/etc/jupyterhub/jupyterhub_config.py
    - --upgrade-db
    env:
    - name: PYTHONUNBUFFERED
      value: "1"
    - name: HELM_RELEASE_NAME
      value: jupyterhub
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: CONFIGPROXY_AUTH_TOKEN
      valueFrom:
        secretKeyRef:
          key: hub.config.ConfigurableHTTPProxy.auth_token
          name: hub
    image: quay.io/jupyterhub/k8s-hub:4.3.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 30
      httpGet:
        path: /hub/health
        port: http
        scheme: HTTP
      initialDelaySeconds: 300
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 3
    name: hub
    ports:
    - containerPort: 8081
      name: http
      protocol: TCP
    readinessProbe:
      failureThreshold: 1000
      httpGet:
        path: /hub/health
        port: http
        scheme: HTTP
      periodSeconds: 2
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsUser: 1000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /usr/local/etc/jupyterhub/jupyterhub_config.py
      name: config
      subPath: jupyterhub_config.py
    - mountPath: /usr/local/etc/jupyterhub/z2jh.py
      name: config
      subPath: z2jh.py
    - mountPath: /usr/local/etc/jupyterhub/config/
      name: config
    - mountPath: /usr/local/etc/jupyterhub/secret/
      name: secret
    - mountPath: /srv/jupyterhub
      name: pvc
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hkb9q
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: k3d-big-data-agent-1
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  serviceAccount: hub
  serviceAccountName: hub
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoSchedule
    key: hub.jupyter.org/dedicated
    operator: Equal
    value: core
  - effect: NoSchedule
    key: hub.jupyter.org_dedicated
    operator: Equal
    value: core
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - configMap:
      defaultMode: 420
      name: hub
    name: config
  - name: secret
    secret:
      defaultMode: 420
      secretName: hub
  - name: pvc
    persistentVolumeClaim:
      claimName: hub-db-dir
  - name: kube-api-access-hkb9q
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2025-12-23T10:23:30Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2025-12-23T10:23:29Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2025-12-23T10:23:33Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2025-12-23T10:23:33Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2025-12-23T10:23:29Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://6244d81c409c9b586cb0b2eec7ca648546b51f6ab9edd5639f900aba6e1f81e7
    image: quay.io/jupyterhub/k8s-hub:4.3.1
    imageID: quay.io/jupyterhub/k8s-hub@sha256:2385d7935320e3d489b386187ac46b3b09b00f263692a62b1721acd6fdda8721
    lastState: {}
    name: hub
    ready: true
    resources: {}
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2025-12-23T10:23:30Z"
    user:
      linux:
        gid: 1000
        supplementalGroups:
        - 100
        - 1000
        uid: 1000
    volumeMounts:
    - mountPath: /usr/local/etc/jupyterhub/jupyterhub_config.py
      name: config
    - mountPath: /usr/local/etc/jupyterhub/z2jh.py
      name: config
    - mountPath: /usr/local/etc/jupyterhub/config/
      name: config
    - mountPath: /usr/local/etc/jupyterhub/secret/
      name: secret
    - mountPath: /srv/jupyterhub
      name: pvc
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-hkb9q
      readOnly: true
      recursiveReadOnly: Disabled
  hostIP: 172.21.0.3
  hostIPs:
  - ip: 172.21.0.3
  phase: Running
  podIP: 10.42.1.47
  podIPs:
  - ip: 10.42.1.47
  qosClass: BestEffort
  startTime: "2025-12-23T10:23:29Z"

Additional information can be found in this issue.

The Jupyter pod was shut down due to inactivity. I can’t log in from where I am now, so I can’t start it to retrieve its configuration.

Sorry, I’m not quite sure if I got this correctly. When you are talking about a “custom image”, you mean an official Jupyter Docker image instead of the default Z2JH test image, right?

Oh, that’s unfortunate, because I think the logs and Kubernetes events of the hanging singleuser server are crucial. Also, next time you have access, taking a look at your browser’s developer tools may be helpful as well.

> Sorry, I’m not quite sure if I got this correctly. When you are talking about a “custom image”, you mean an official Jupyter Docker image instead of the default Z2JH test image, right?

Yes

> Oh, that’s unfortunate, because I think the logs and Kubernetes events of the hanging singleuser server are crucial. Also, next time you have access, taking a look at your browser’s developer tools may be helpful as well.

I have uploaded the hub and Jupyter pod logs to the issue. I just can’t retrieve the YAML configuration of the Jupyter pod.

Could you also share the browser console logs and network requests? Also, can you share your custom image Dockerfile? Cheers!

I didn’t write my own Dockerfile; I used the image from Quay. After leaving it alone for a day, I logged in again today and everything was back to normal. It probably wasn’t fixed by a restart, because yesterday I tried restarting the Jupyter pod several times (not via a Kubernetes restart, but through the JupyterHub admin console).
