Deploying JupyterHub on OKD - `unable to validate against any security context constraint`

tl;dr:

I would like to know how to fix this error:

Error creating: pods "hook-image-awaiter-" is forbidden: unable to validate against any security context constraint: [
 provider "anyuid": Forbidden: not usable by user or serviceaccount,
 spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],

detailed version:

I would like to run a JupyterHub using OKD.
I do not yet fully understand how (and what) to configure. Below is what I did and the errors I received.

oc login --token=<my OKD API token> --server=<server-address:port>

Logged into "<server-address:port>" as "kube:admin" using the token provided.
You have access to 73 projects, the list has been suppressed. You can list all projects with 'oc projects'

oc new-project turing

Now using project "turing" on server "<server-address:port>".
You can add applications to this project with the 'new-app' command. For example, try:
oc new-app rails-postgresql-example
to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
kubectl create deployment hello-node --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 -- /agnhost serve-hostname

Now, following the Zero to JupyterHub guide, I create an empty config.yaml.

Following the next step I run:

helm repo add jupyterhub-discourse https://jupyterhub.github.io/helm-chart/

"jupyterhub-discourse" has been added to your repositories

helm repo update

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jupyterhub-discourse" chart repository
Update Complete. ⎈Happy Helming!⎈

My helm upgrade settings explained:

  • “version1”: the Helm release name, as in the docs
  • “jupyterhub/jupyterhub”: the JupyterHub Helm chart
  • “turing”: the namespace I created earlier
  • “1.2.0”: the JupyterHub Helm chart version I am using
  • “config.yaml”: the empty file I created
helm upgrade --install version1 jupyterhub/jupyterhub --namespace turing --create-namespace --version=1.2.0 --values config.yaml --debug

history.go:56: [debug] getting history for release version1
Release "version1" does not exist. Installing it now.
install.go:178: [debug] Original chart version: "1.2.0"
install.go:195: [debug] CHART PATH: C:\Users\MUUQ04~1\AppData\Local\Temp\helm\repository\jupyterhub-1.2.0.tgz

client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-puller" DaemonSet
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" Role
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" RoleBinding
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" Job
client.go:128: [debug] creating 1 resource(s)
client.go:529: [debug] Watching for changes to Job hook-image-awaiter with timeout of 5m0s
client.go:557: [debug] Add/Modify event for hook-image-awaiter: ADDED
client.go:596: [debug] hook-image-awaiter: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
Error: failed pre-install: timed out waiting for the condition
helm.go:84: [debug] failed pre-install: timed out waiting for the condition

This is the same error as the one described in this topic.
But when I run:

kubectl get events --sort-by='{.lastTimestamp}'

I get the following securityContext error:

LAST SEEN   TYPE      REASON         OBJECT                        MESSAGE

6m1s        Warning   FailedCreate   daemonset/hook-image-puller   
Error creating: pods "hook-image-puller-" is forbidden: unable to validate against any security context constraint: [
 provider "anyuid": Forbidden: not usable by user or serviceaccount,
 spec.initContainers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 spec.initContainers[1].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 provider "nonroot": Forbidden: not usable by user or serviceaccount,
 provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
 provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
 provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
 provider "hostaccess": Forbidden: not usable by user or serviceaccount,
 provider "node-exporter": Forbidden: not usable by user or serviceaccount,
 provider "privileged": Forbidden: not usable by user or serviceaccount]
 
18s         Warning   FailedCreate   job/hook-image-awaiter        
Error creating: pods "hook-image-awaiter-" is forbidden: unable to validate against any security context constraint: [
 provider "anyuid": Forbidden: not usable by user or serviceaccount,
 spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 provider "nonroot": Forbidden: not usable by user or serviceaccount,
 provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
 provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
 provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
 provider "hostaccess": Forbidden: not usable by user or serviceaccount,
 provider "node-exporter": Forbidden: not usable by user or serviceaccount,
 provider "privileged": Forbidden: not usable by user or serviceaccount]

According to the OpenShift documentation, the user behind UID 65534 is “nfsnobody”.

How and where can I configure a correct user?

What did not work:

1. Setting uid + fsGid the same way as in the old discussion:

Trying to run it as described in that discussion leads to:

PS C:\Users\<user>\Documents\CDI\jupyter> helm upgrade --install version2 jupyterhub/jupyterhub --namespace turing --create-namespace --version=1.2.0 --values config.yaml
Error: UPGRADE FAILED: execution error at (jupyterhub/templates/NOTES.txt:154:4):

#################################################################################
######   BREAKING: The config values passed contained no longer accepted    #####
######             options. See the messages below for more details.        #####
######                                                                      #####
######             To verify your updated config is accepted, you can use   #####
######             the `helm template` command.                             #####
#################################################################################

RENAMED: hub.uid must as of 1.0.0 be configured using hub.containerSecurityContext.runAsUser

config.yaml:

proxy:
  secretToken: "sha256~incrediblesecretsecret"
hub:
  uid: 0
  fsGid: 0

Trying to fix that error (the option was renamed in chart version 1.0.0):

config.yaml changed to:

proxy:
  secretToken: "sha256~incrediblesecretsecret"
hub:
  containerSecurityContext:
    runAsUser: 1000
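
As the chart's NOTES above suggest, the updated config can be checked without touching the cluster by rendering the templates locally. A minimal sketch, using the same release name, chart version, and values file as above; any no-longer-accepted options would surface as the same NOTES.txt error:

helm template version2 jupyterhub/jupyterhub --version=1.2.0 --values config.yaml > rendered.yaml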

Running it again returns:

PS C:\Users\<user>\Documents\CDI\jupyter> helm upgrade --install version2 jupyterhub/jupyterhub --namespace turing --create-namespace --version=1.2.0 --values config.yaml --debug
history.go:56: [debug] getting history for release version2
upgrade.go:142: [debug] preparing upgrade for version2
upgrade.go:150: [debug] performing update for version2
upgrade.go:322: [debug] creating upgraded release for version2
client.go:299: [debug] Starting delete for "hook-image-puller" DaemonSet
client.go:328: [debug] daemonsets.apps "hook-image-puller" not found
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" Role
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" RoleBinding
client.go:128: [debug] creating 1 resource(s)
client.go:299: [debug] Starting delete for "hook-image-awaiter" Job
client.go:328: [debug] jobs.batch "hook-image-awaiter" not found
client.go:128: [debug] creating 1 resource(s)
client.go:529: [debug] Watching for changes to Job hook-image-awaiter with timeout of 5m0s
client.go:557: [debug] Add/Modify event for hook-image-awaiter: ADDED
client.go:596: [debug] hook-image-awaiter: Jobs active: 0, jobs failed: 0, jobs succeeded: 0
upgrade.go:431: [debug] warning: Upgrade "version2" failed: pre-upgrade hooks failed: timed out waiting for the condition
Error: UPGRADE FAILED: pre-upgrade hooks failed: timed out waiting for the condition
helm.go:84: [debug] pre-upgrade hooks failed: timed out waiting for the condition
UPGRADE FAILED
main.newUpgradeCmd.func2
        helm.sh/helm/v3/cmd/helm/upgrade.go:199
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.3.0/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.3.0/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.3.0/command.go:902
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:83
runtime.main
        runtime/proc.go:255
runtime.goexit
        runtime/asm_amd64.s:1581
PS C:\Users\<user>\Documents\CDI\jupyter> kubectl get events --sort-by='{.lastTimestamp}'
3m16s       Warning   FailedCreate   job/hook-image-awaiter        
Error creating: pods "hook-image-awaiter-" is forbidden: unable to validate against any security context constraint: [
 provider "anyuid": Forbidden: not usable by user or serviceaccount,
 spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 provider "nonroot": Forbidden: not usable by user or serviceaccount,
 provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
 provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
 provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
 provider "hostaccess": Forbidden: not usable by user or serviceaccount,
 provider "node-exporter": Forbidden: not usable by user or serviceaccount,
 provider "privileged": Forbidden: not usable by user or serviceaccount]
2m58s       Warning   FailedCreate   daemonset/hook-image-puller   
Error creating: pods "hook-image-puller-" is forbidden: unable to validate against any security context constraint: [
 provider "anyuid": Forbidden: not usable by user or serviceaccount,
 spec.initContainers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 spec.initContainers[1].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 spec.containers[0].securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000690000, 1000699999],
 provider "nonroot": Forbidden: not usable by user or serviceaccount,
 provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount,
 provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount,
 provider "hostnetwork": Forbidden: not usable by user or serviceaccount,
 provider "hostaccess": Forbidden: not usable by user or serviceaccount,
 provider "node-exporter": Forbidden: not usable by user or serviceaccount,
 provider "privileged": Forbidden: not usable by user or serviceaccount]

The user is still 65534.
I tried the same with runAsUser: 1000690042. Same result: timeout due to security context constraint error.

How to solve “unable to validate against any security context constraint”

The providers listed in the error message are the default security context constraints (SCCs). For details, see the Red Hat OpenShift documentation.

My understanding of the error:

  • Kubernetes tries out all the default security context constraints (SCCs); none of them admits the pod, hence the error.
  • We do not want to run JupyterHub with admin rights.
  • The restricted SCC “requires that a pod is run as a user in a pre-allocated range of UIDs”.
  • On my OKD cluster, that pre-allocated range changes as new pods are created; the commands below show how to look up the range allocated to a project.
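
To see which SCCs exist on the cluster and which UID range was pre-allocated to the project, something like the following should help (a sketch, assuming the project is named turing; the exact values will differ per cluster and project):

oc get scc
oc describe namespace turing
# look for the annotation openshift.io/sa.scc.uid-range,
# e.g. 1000690000/10000 (start UID / size of the allowed range)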

What I did

  1. add the JupyterHub helm chart repo (just like the guide)
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
  2. pipe the default values yaml into a local file
 helm show values jupyterhub/jupyterhub > ./jupyterhub.yaml
  3. I changed all occurrences of 65534 to be in the range the error states:
    from this:
    containerSecurityContext:
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group

to this:

    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group

That led to the following error (at this point only the hub still had the error):

So I modified the following in the same way (except picking a different integer; a minimal config.yaml equivalent is sketched after this list):

  containerSecurityContext:
    runAsUser: 1000 -> 1000830000
    runAsGroup: 1000 -> 1000830000
  4. use the modified .yaml for installation
helm install jupyterhub jupyterhub/jupyterhub --values .\jupyterhub.yaml --namespace blue --create-namespace
  5. apply changes like this:
helm upgrade jupyterhub jupyterhub/jupyterhub --values .\jupyterhub.yaml --namespace blue
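
For reference, the same overrides can likely be expressed as a much smaller config.yaml instead of editing the full values dump, since Helm merges user values over the chart defaults. A minimal sketch, reusing the example UIDs from above (adjust them to your project's allocated range); the remaining containerSecurityContext blocks (proxy.traefik, proxy.secretSync, scheduling.userScheduler, scheduling.userPlaceholder) would be overridden the same way:

hub:
  containerSecurityContext:
    runAsUser: 1000830000    # pick a UID from the project's allocated range
    runAsGroup: 1000830000
    allowPrivilegeEscalation: false
proxy:
  chp:
    containerSecurityContext:
      runAsUser: 1000830001
      runAsGroup: 1000830001
      allowPrivilegeEscalation: false
prePuller:
  containerSecurityContext:
    runAsUser: 1000830001
    runAsGroup: 1000830001
    allowPrivilegeEscalation: false
  hook:
    containerSecurityContext:
      runAsUser: 1000830001
      runAsGroup: 1000830001
      allowPrivilegeEscalation: false
  pause:
    containerSecurityContext:
      runAsUser: 1000830001
      runAsGroup: 1000830001
      allowPrivilegeEscalation: false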

Conclusion

I understand a bit more about SCCs. But now I’m getting this error:

The entire config file I am using at the moment:

# fullnameOverride and nameOverride distinguishes blank strings, null values,
# and non-blank strings. For more details, see the configuration reference.
fullnameOverride: ""
nameOverride:

# custom can contain anything you want to pass to the hub pod, as all passed
# Helm template values will be made available there.
custom: {}

# imagePullSecret is configuration to create a k8s Secret that Helm chart's pods
# can get credentials from to pull their images.
imagePullSecret:
  create: false
  automaticReferenceInjection: true
  registry:
  username:
  password:
  email:
# imagePullSecrets is configuration to reference the k8s Secret resources the
# Helm chart's pods can get credentials from to pull their images.
imagePullSecrets: []

# hub relates to the hub pod, responsible for running JupyterHub, its configured
# Authenticator class KubeSpawner, and its configured Proxy class
# ConfigurableHTTPProxy. KubeSpawner creates the user pods, and
# ConfigurableHTTPProxy speaks with the actual ConfigurableHTTPProxy server in
# the proxy pod.
hub:
  config:
    JupyterHub:
      admin_access: true
      authenticator_class: dummy
  service:
    type: ClusterIP
    annotations: {}
    ports:
      nodePort:
    extraPorts: []
    loadBalancerIP:
  baseUrl: /
  cookieSecret:
  initContainers: []
  fsGid: null
  nodeSelector: {}
  tolerations: []
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  activeServerLimit:
  deploymentStrategy:
    ## type: Recreate
    ## - sqlite-pvc backed hubs require the Recreate deployment strategy as a
    ##   typical PVC storage can only be bound to one pod at the time.
    ## - JupyterHub isn't designed to support being run in parallel. More work
    ##   needs to be done in JupyterHub itself before a fully highly available
    ##   (HA) deployment of JupyterHub on k8s is possible.
    type: Recreate
  db:
    type: sqlite-pvc
    upgrade:
    pvc:
      annotations: {}
      selector: {}
      accessModes:
        - ReadWriteOnce
      storage: 1Gi
      subPath:
      storageClassName:
    url:
    password:
  labels: {}
  annotations: {}
  command: []
  args: []
  extraConfig: {}
  extraFiles: {}
  extraEnv: {}
  extraContainers: []
  extraVolumes: []
  extraVolumeMounts: []
  image:
    name: jupyterhub/k8s-hub
    tag: "1.2.0"
    pullPolicy:
    pullSecrets: []
  resources: {}
  containerSecurityContext:
    runAsUser: 1000830000
    runAsGroup: 1000830000
    allowPrivilegeEscalation: false
  lifecycle: {}
  services: {}
  pdb:
    enabled: false
    maxUnavailable:
    minAvailable: 1
  networkPolicy:
    enabled: true
    ingress: []
    ## egress for JupyterHub already includes Kubernetes internal DNS and
    ## access to the proxy, but can be restricted further, but ensure to allow
    ## access to the Kubernetes API server that couldn't be pinned ahead of
    ## time.
    ##
    ## ref: https://stackoverflow.com/a/59016417/2220152
    egress:
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  allowNamedServers: false
  namedServerLimitPerUser:
  authenticatePrometheus:
  redirectToServer:
  shutdownOnLogout:
  templatePaths: []
  templateVars: {}
  livenessProbe:
    # The livenessProbe's aim to give JupyterHub sufficient time to startup but
    # be able to restart if it becomes unresponsive for ~5 min.
    enabled: true
    initialDelaySeconds: 300
    periodSeconds: 10
    failureThreshold: 30
    timeoutSeconds: 3
  readinessProbe:
    # The readinessProbe's aim is to provide a successful startup indication,
    # but following that never become unready before its livenessProbe fail and
    # restarts it if needed. To become unready following startup serves no
    # purpose as there are no other pod to fallback to in our non-HA deployment.
    enabled: true
    initialDelaySeconds: 0
    periodSeconds: 2
    failureThreshold: 1000
    timeoutSeconds: 1
  existingSecret:
  serviceAccount:
    annotations: {}
  extraPodSpec: {}

rbac:
  enabled: true

# proxy relates to the proxy pod, the proxy-public service, and the autohttps
# pod and proxy-http service.
proxy:
  secretToken:
  annotations: {}
  deploymentStrategy:
    ## type: Recreate
    ## - JupyterHub's interaction with the CHP proxy becomes a lot more robust
    ##   with this configuration. To understand this, consider that JupyterHub
    ##   during startup will interact a lot with the k8s service to reach a
    ##   ready proxy pod. If the hub pod during a helm upgrade is restarting
    ##   directly while the proxy pod is making a rolling upgrade, the hub pod
    ##   could end up running a sequence of interactions with the old proxy pod
    ##   and finishing up the sequence of interactions with the new proxy pod.
    ##   As CHP proxy pods carry individual state this is very error prone. One
    ##   outcome when not using Recreate as a strategy has been that user pods
    ##   have been deleted by the hub pod because it considered them unreachable
    ##   as it only configured the old proxy pod but not the new before trying
    ##   to reach them.
    type: Recreate
    ## rollingUpdate:
    ## - WARNING:
    ##   This is required to be set explicitly blank! Without it being
    ##   explicitly blank, k8s will let eventual old values under rollingUpdate
    ##   remain and then the Deployment becomes invalid and a helm upgrade would
    ##   fail with an error like this:
    ##
    ##     UPGRADE FAILED
    ##     Error: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    ##     Error: UPGRADE FAILED: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    rollingUpdate:
  # service relates to the proxy-public service
  service:
    type: LoadBalancer
    labels: {}
    annotations: {}
    nodePorts:
      http:
      https:
    disableHttpPort: false
    extraPorts: []
    loadBalancerIP:
    loadBalancerSourceRanges: []
  # chp relates to the proxy pod, which is responsible for routing traffic based
  # on dynamic configuration sent from JupyterHub to CHP's REST API.
  chp:
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: jupyterhub/configurable-http-proxy
      tag: 4.5.0 # https://github.com/jupyterhub/configurable-http-proxy/releases
      pullPolicy:
      pullSecrets: []
    extraCommandLineFlags: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 60
      periodSeconds: 10
    readinessProbe:
      enabled: true
      initialDelaySeconds: 0
      periodSeconds: 2
      failureThreshold: 1000
    resources: {}
    defaultTarget:
    errorTarget:
    extraEnv: {}
    nodeSelector: {}
    tolerations: []
    networkPolicy:
      enabled: true
      ingress: []
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    extraPodSpec: {}
  # traefik relates to the autohttps pod, which is responsible for TLS
  # termination when proxy.https.type=letsencrypt.
  traefik:
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: traefik
      tag: v2.4.11 # ref: https://hub.docker.com/_/traefik?tab=tags
      pullPolicy:
      pullSecrets: []
    hsts:
      includeSubdomains: false
      preload: false
      maxAge: 15724800 # About 6 months
    resources: {}
    labels: {}
    extraEnv: {}
    extraVolumes: []
    extraVolumeMounts: []
    extraStaticConfig: {}
    extraDynamicConfig: {}
    nodeSelector: {}
    tolerations: []
    extraPorts: []
    networkPolicy:
      enabled: true
      ingress: []
      egress:
        - to:
            - ipBlock:
                cidr: 0.0.0.0/0
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    serviceAccount:
      annotations: {}
    extraPodSpec: {}
  secretSync:
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: jupyterhub/k8s-secret-sync
      tag: "1.2.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  labels: {}
  https:
    enabled: false
    type: letsencrypt
    #type: letsencrypt, manual, offload, secret
    letsencrypt:
      contactEmail:
      # Specify custom server here (https://acme-staging-v02.api.letsencrypt.org/directory) to hit staging LE
      acmeServer: https://acme-v02.api.letsencrypt.org/directory
    manual:
      key:
      cert:
    secret:
      name:
      key: tls.key
      crt: tls.crt
    hosts: []

# singleuser relates to the configuration of KubeSpawner which runs in the hub
# pod, and its spawning of user pods such as jupyter-myusername.
singleuser:
  podNameTemplate:
  extraTolerations: []
  nodeSelector: {}
  extraNodeAffinity:
    required: []
    preferred: []
  extraPodAffinity:
    required: []
    preferred: []
  extraPodAntiAffinity:
    required: []
    preferred: []
  networkTools:
    image:
      name: jupyterhub/k8s-network-tools
      tag: "1.2.0"
      pullPolicy:
      pullSecrets: []
  cloudMetadata:
    # blockWithIptables set to true will append a privileged initContainer that
    # uses iptables to block the sensitive metadata server at the provided ip.
    blockWithIptables: true
    ip: 169.254.169.254
  networkPolicy:
    enabled: true
    ingress: []
    egress:
      # Required egress to communicate with the hub and DNS servers will be
      # augmented to these egress rules.
      #
      # This default rule explicitly allows all outbound traffic from singleuser
      # pods, except to a typical IP used to return metadata that can be used by
      # someone with malicious intent.
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 169.254.169.254/32
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  events: true
  extraAnnotations: {}
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
  extraFiles: {}
  extraEnv: {}
  lifecycleHooks: {}
  initContainers: []
  extraContainers: []
  uid: 1000
  fsGid: 100
  serviceAccountName:
  storage:
    type: dynamic
    extraLabels: {}
    extraVolumes: []
    extraVolumeMounts: []
    static:
      pvcName:
      subPath: "{username}"
    capacity: 10Gi
    homeMountPath: /home/jovyan
    dynamic:
      storageClass:
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
      storageAccessModes: [ReadWriteOnce]
  image:
    name: jupyterhub/k8s-singleuser-sample
    tag: "1.2.0"
    pullPolicy:
    pullSecrets: []
  startTimeout: 300
  cpu:
    limit:
    guarantee:
  memory:
    limit:
    guarantee: 1G
  extraResource:
    limits: {}
    guarantees: {}
  cmd: jupyterhub-singleuser
  defaultUrl:
  extraPodConfig: {}
  profileList: []

# scheduling relates to the user-scheduler pods and user-placeholder pods.
scheduling:
  userScheduler:
    enabled: true
    replicas: 2
    logLevel: 4
    # plugins ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins-1
    plugins:
      score:
        disabled:
          - name: SelectorSpread
          - name: TaintToleration
          - name: PodTopologySpread
          - name: NodeResourcesBalancedAllocation
          - name: NodeResourcesLeastAllocated
          # Disable plugins to be allowed to enable them again with a different
          # weight and avoid an error.
          - name: NodePreferAvoidPods
          - name: NodeAffinity
          - name: InterPodAffinity
          - name: ImageLocality
        enabled:
          - name: NodePreferAvoidPods
            weight: 161051
          - name: NodeAffinity
            weight: 14631
          - name: InterPodAffinity
            weight: 1331
          - name: NodeResourcesMostAllocated
            weight: 121
          - name: ImageLocality
            weight: 11
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    image:
      # IMPORTANT: Bumping the minor version of this binary should go hand in
      #            hand with an inspection of the user-scheduler's RBAC resources
      #            that we have forked.
      name: k8s.gcr.io/kube-scheduler
      tag: v1.19.13 # ref: https://github.com/kubernetes/website/blob/7c09c7b4be4b33ef573a2268b6adb94d6fa6615b/content/en/releases/patch-releases.md
      pullPolicy:
      pullSecrets: []
    nodeSelector: {}
    tolerations: []
    pdb:
      enabled: true
      maxUnavailable: 1
      minAvailable:
    resources: {}
    serviceAccount:
      annotations: {}
    extraPodSpec: {}
  podPriority:
    enabled: false
    globalDefault: false
    defaultPriority: 0
    userPlaceholderPriority: -10
  userPlaceholder:
    enabled: true
    image:
      name: k8s.gcr.io/pause
      # tags can be updated by inspecting the output of the command:
      # gcloud container images list-tags k8s.gcr.io/pause --sort-by=~tags
      #
      # If you update this, also update prePuller.pause.image.tag
      tag: "3.5"
      pullPolicy:
      pullSecrets: []
    replicas: 0
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    resources: {}
  corePods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: core
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: core
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer
  userPods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: user
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: user
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer

# prePuller relates to the hook|continuous-image-puller DaemonsSets
prePuller:
  annotations: {}
  resources: {}
  containerSecurityContext:
    runAsUser: 1000830001 # nobody user
    runAsGroup: 1000830001 # nobody group
    allowPrivilegeEscalation: false
  extraTolerations: []
  # hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
  hook:
    enabled: true
    pullOnlyOnChanges: true
    # image and the configuration below relates to the hook-image-awaiter Job
    image:
      name: jupyterhub/k8s-image-awaiter
      tag: "1.2.0"
      pullPolicy:
      pullSecrets: []
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    podSchedulingWaitDuration: 10
    nodeSelector: {}
    tolerations: []
    resources: {}
    serviceAccount:
      annotations: {}
  continuous:
    enabled: true
  pullProfileListImages: true
  extraImages: {}
  pause:
    containerSecurityContext:
      runAsUser: 1000830001 # nobody user
      runAsGroup: 1000830001 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: k8s.gcr.io/pause
      # tags can be updated by inspecting the output of the command:
      # gcloud container images list-tags k8s.gcr.io/pause --sort-by=~tags
      #
      # If you update this, also update scheduling.userPlaceholder.image.tag
      tag: "3.5"
      pullPolicy:
      pullSecrets: []

ingress:
  enabled: false
  annotations: {}
  hosts: []
  pathSuffix:
  pathType: Prefix
  tls: []

# cull relates to the jupyterhub-idle-culler service, responsible for evicting
# inactive singleuser pods.
#
# The configuration below, except for enabled, corresponds to command-line flags
# for jupyterhub-idle-culler as documented here:
# https://github.com/jupyterhub/jupyterhub-idle-culler#as-a-standalone-script
#
cull:
  enabled: true
  users: false # --cull-users
  removeNamedServers: false # --remove-named-servers
  timeout: 3600 # --timeout
  every: 600 # --cull-every
  concurrency: 10 # --concurrency
  maxAge: 0 # --max-age

debug:
  enabled: false

global:
  safeToShowValues: false


Z2JH has two main default settings that may require elevated privileges:

Update on the state of JupyterHub on OKD:

I was able to get a pod up and running.
WARNING: manually bashing into the pod and editing it is not what you want to do.
But I wanted to share my progress, in case it helps someone.

Steps to reproduce:

  • generate the pod's .yaml with podman (see below) and create the pod from it
michael@debian:~/Documents$ podman --runtime /usr/bin/crun generate kube ugh
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-3.0.1
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2022-05-30T14:25:54Z"
  labels:
    app: teri
  name: teri
spec:
  containers:
  - command:
    - jupyterhub
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: container
      value: podman
    - name: DEBIAN_FRONTEND
      value: noninteractive
    - name: SHELL
      value: /bin/bash
    - name: LC_ALL
      value: en_US.UTF-8
    - name: LANG
      value: en_US.UTF-8
    - name: LANGUAGE
      value: en_US.UTF-8
    image: docker.io/jupyterhub/jupyterhub
    name: teri
    ports:
    - containerPort: 8000
      hostPort: 8500
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities:
        drop:
        - CAP_MKNOD
        - CAP_NET_RAW
        - CAP_AUDIT_WRITE
      privileged: false
      readOnlyRootFilesystem: false
      runAsGroup: 0
      runAsUser: 0
      seLinuxOptions: {}
    workingDir: /srv/jupyterhub/
  dnsConfig: {}
status: {}
  • create a service for your pod like this
apiVersion: v1
kind: Service
metadata:
  name: teri-service
spec:
  selector:
    app: teri
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  • run this command: oc expose service teri-service

  • Now you have a JupyterHub running, but it cannot spawn a Notebook

  • So I followed this guide here

Copy of the guide:

update the repositories

apt update

install python3 and related packages

apt-get install npm nodejs python3 python3-pip git nano

install jupyterhub and jupyterlab

python3 -m pip install jupyterhub notebook jupyterlab

install nodejs packages

npm install -g configurable-http-proxy

install native authenticator package

cd /home

git clone https://github.com/jupyterhub/nativeauthenticator.git

cd nativeauthenticator

pip3 install -e .

modify jupyterconfig

mkdir /etc/jupyterhub

cd /etc/jupyterhub

jupyterhub --generate-config -f jupyterhub_config.py

edit the jupyterhub config file

nano jupyterhub_config.py

import pwd, subprocess

# use the native authenticator installed above and make "admin" an admin user
c.JupyterHub.authenticator_class = 'nativeauthenticator.NativeAuthenticator'
c.Authenticator.admin_users = {'admin'}

# before spawning, create a matching system user if it does not exist yet
def pre_spawn_hook(spawner):
    username = spawner.user.name
    try:
        pwd.getpwnam(username)
    except KeyError:
        subprocess.check_call(['useradd', '-ms', '/bin/bash', username])

c.Spawner.pre_spawn_hook = pre_spawn_hook

# open JupyterLab by default
c.Spawner.default_url = '/lab'
  • Create a new user by entering your container with kubectl exec -ti teri -- bash
    • and running adduser <USERNAME HERE>
  • log in and use the notebook (a quick port-forward check is sketched below)
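
To check that the hub actually answers before relying on the route, port-forwarding should work (a sketch, assuming the pod is still named teri and the hub listens on port 8000 as in the pod spec above):

oc port-forward pod/teri 8000:8000
# then open http://localhost:8000 in a browser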

Final Update (just in case someone stumbles upon this issue):

I gave up on deploying JupyterHub on OKD and I would not recommend anyone try it.
Naively, I thought I could use Graham Dumpleton's work, but that did not lead anywhere productive.

Another option is to grant existing security context constraints (SCCs) to the relevant service accounts, e.g.,

oc adm policy add-scc-to-user anyuid -z hook-image-awaiter -n <ns>
oc adm policy add-scc-to-user anyuid -z hub -n <ns>
oc adm policy add-scc-to-user anyuid -z user-scheduler -n <ns>
oc adm policy add-scc-to-user anyuid -z default -n <ns>
oc adm policy add-scc-to-user privileged -z default -n <ns>
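
After granting an SCC, it can help to confirm which SCC a pod was actually admitted under; OpenShift records this in the openshift.io/scc annotation on the pod (a sketch; substitute the real pod name and namespace):

oc get pod <pod-name> -n <ns> -o yaml
# look for this annotation in the output:
#   openshift.io/scc: anyuid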