User placeholder deployment fails with ResourceQuota exceeded

Hey @consideRatio, following up on my GitHub comment:

I’m running into an issue with user-placeholder: it gets deployed in the same namespace as the other hub pods, which has minimal resources. Instead, I want to deploy the user-placeholder pods in the single-user namespace, where more resources are available. There are multiple approaches, so what’s the best way to do it?

Issue:

jupyterhub   19s         Warning   FailedCreate              statefulset/user-placeholder           create Pod user-placeholder-0 in StatefulSet user-placeholder failed error: pods "user-placeholder-0" is forbidden: exceeded quota: jupyterhub-resourcequota, requested: limits.cpu=2,limits.memory=4G,requests.cpu=1,requests.memory=2G, used: limits.cpu=810m,limits.memory=1290Mi,requests.cpu=250m,requests.memory=640Mi, limited: limits.cpu=1,limits.memory=1536Mi,requests.cpu=300m,requests.memory=768Mi

As you can see, the user-placeholder pods are getting deployed in the jupyterhub namespace, which has minimal resources compared to jupyter-users, where the single-user pods are deployed. Since I’m not providing any resources for user-placeholder, it inherits the resources configured for the single-user pods.
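
For reference, the usage reported in the error can be confirmed directly (the quota name is taken from the error message above):

kubectl describe resourcequota jupyterhub-resourcequota -n jupyterhub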

What’s the best approach here?

  • Specify resources for user-placeholder in my config.yaml that fit within the jupyterhub namespace’s quota, so there is no ResourceQuota issue, while the single-user pods still get deployed in the jupyter-users namespace as configured (see the sketch after this list). I hope there won’t be any issues with this.

or

  • Change the namespace so the user-placeholder pods are deployed in the jupyter-users namespace, since that is where the single-user pods are deployed.

If changing the namespace is the right approach, what’s the best way to do that, given that helm install .... -n jupyterhub will deploy all of the chart’s pods in the jupyterhub namespace?
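
For the first option, this is roughly what I had in mind, assuming the chart exposes scheduling.userPlaceholder.resources (newer chart versions document it; I haven’t verified that 0.10.2 does, and the request/limit numbers below are just placeholders sized to fit our quota):

scheduling:
    userPlaceholder:
        replicas: 3
        # Assumed key: give the placeholder pods their own small footprint
        # instead of mirroring the singleuser cpu/memory values.
        resources:
            requests:
                cpu: 50m
                memory: 128Mi
            limits:
                cpu: 200m
                memory: 256Mi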

Context:

We have two namespaces: jupyterhub and jupyter-users.

  • The jupyterhub namespace has the hub, proxy, image-puller, and user-scheduler pods running.
  • The jupyter-users namespace has all the single-user pods.

This isolation was done based on our network policies.

I’m using c.KubeSpawner.namespace = 'jupyter-users' in my config.yaml to have this isolation between namespaces.
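
That setting goes in via hub.extraConfig, roughly like this (the snippet key spawnerNamespace is just a label I chose for this example):

hub:
    extraConfig:
        spawnerNamespace: |
            # Spawn single-user servers into the jupyter-users namespace
            c.KubeSpawner.namespace = 'jupyter-users'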

Deploy command:

helm upgrade --install jhub jupyterhub/jupyterhub --version 0.10.2 -n jupyterhub --values config.yaml
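
Before relying on any userPlaceholder sub-keys, I’m also planning to inspect the chart’s default values for this exact version to confirm what it actually supports:

helm show values jupyterhub/jupyterhub --version 0.10.2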

The jupyterhub namespace has a ResourceQuota enabled, meaning any pod deployed in that namespace must fit within the quota’s limits.
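
For reference, the quota is roughly equivalent to the following (the hard limits are read off the “limited:” part of the error above, not copied verbatim from our manifest):

apiVersion: v1
kind: ResourceQuota
metadata:
    name: jupyterhub-resourcequota
    namespace: jupyterhub
spec:
    hard:
        requests.cpu: 300m
        requests.memory: 768Mi
        limits.cpu: "1"
        limits.memory: 1536Mi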

Configuration:

  • Snippet from my config.yaml:
singleuser:
    image:
        name: "xxxxxxxxx/jupyter-singleuser-z2jh"
        tag: "1.0"
        pullPolicy: Always
    # Change these values according to your machine
    memory:
        limit: 4.0G
        guarantee: 2.0G
    cpu:
        limit: 2.0
        guarantee: 1.0

scheduling:
    podPriority:
        enabled: true
    userPlaceholder:
        # Three dummy user pods will be used as placeholders
        replicas: 3
    # https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#scheduling-corepods-nodeaffinity
    corePods:
        nodeAffinity:
            # matchNodePurpose valid options:
            # - ignore
            # - prefer (the default)
            # - require
            matchNodePurpose: require
    # https://zero-to-jupyterhub.readthedocs.io/en/latest/resources/reference.html#scheduling-userpods-nodeaffinity
    userPods:
        nodeAffinity:
            # matchNodePurpose valid options:
            # - ignore
            # - prefer (the default)
            # - require
            matchNodePurpose: require
    userScheduler:
        enabled: true
        resources:
            requests:
                cpu: 50m
                memory: 128Mi
            limits:
                cpu: 200m
                memory: 256Mi
        containerSecurityContext:
            runAsUser: 65534  # nobody user
            runAsGroup: 65534 # nobody group
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true 
            capabilities: 
                add: ["NET_BIND_SERVICE", "NET_ADMIN"]
                drop: 
                    - ALL

Appreciate your help. Let me know if you have any questions.