Unable to get Issuer certificate

Hi,

I am setting up JupyterHub in Kubernetes with a locally issued ca.crt.
My hub pod is not able to start the single-user server and throws the error below.
However, I can exec into my hub pod, and from /var/run/secrets/kubernetes.io/serviceaccount/ I can successfully curl my API server using the certs and token provided there.

I suspect JupyterHub is not using this directory when authenticating.

I have also tried the workaround from: Configuring web-requests to use a proxy · Issue #217 · jupyterhub/oauthenticator · GitHub
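For reference, here is a minimal sketch of what I believe the hub's client is doing, runnable from inside the hub pod. It mirrors the aiohttp call path shown in the traceback below; the API address kubernetes.default.svc and the default namespace are assumptions, so adjust them to your cluster:

import asyncio
import ssl

import aiohttp

# Standard in-cluster service-account mount; the same files used in the curl test above.
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
# Assumption: the usual in-cluster API server address; adjust if your cluster differs.
API_URL = "https://kubernetes.default.svc"


async def main():
    with open(f"{SA_DIR}/token") as f:
        token = f.read().strip()
    # Verify the server against the same ca.crt that the service account mounts.
    ssl_ctx = ssl.create_default_context(cafile=f"{SA_DIR}/ca.crt")
    async with aiohttp.ClientSession() as session:
        async with session.get(
            f"{API_URL}/api/v1/namespaces/default/pods",
            headers={"Authorization": f"Bearer {token}"},
            ssl=ssl_ctx,
        ) as resp:
            # Any HTTP status here means the TLS chain verified; an
            # SSLCertVerificationError instead reproduces the hub's failure.
            print(resp.status)


asyncio.run(main())

If this fails with the same "unable to get issuer certificate" message while plain curl succeeds, comparing the exact URL and CA file each one uses should narrow things down.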

[E 2023-03-24 10:27:38.618 JupyterHub reflector:385] Initial list of pods failed
    Traceback (most recent call last):
      File "/usr/local/lib/python3.11/site-packages/kubespawner/reflector.py", line 383, in start
        await self._list_and_update()
      File "/usr/local/lib/python3.11/site-packages/kubespawner/reflector.py", line 228, in _list_and_update
        initial_resources_raw = await list_method(**kwargs)
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/api_client.py", line 185, in __call_api
        response_data = await self.request(
                        ^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/rest.py", line 193, in GET
        return (await self.request("GET", url,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/rest.py", line 177, in request
        r = await self.pool_manager.request(**args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
        conn = await self._connector.connect(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
        proto = await self._create_connection(req, traces, timeout)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
        _, proto = await self._create_direct_connection(req, traces, timeout)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
        raise last_exc
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
        transp, proto = await self._wrap_create_connection(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
        raise ClientConnectorCertificateError(req.connection_key, exc) from exc
    aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host <redacted>:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:992)')]
    
[E 2023-03-24 10:27:38.622 JupyterHub spawner:2422] Reflector for pods failed to start.
    Traceback (most recent call last):
      File "/usr/local/lib/python3.11/site-packages/kubespawner/spawner.py", line 2420, in catch_reflector_start
        await f
      File "/usr/local/lib/python3.11/site-packages/kubespawner/reflector.py", line 383, in start
        await self._list_and_update()
      File "/usr/local/lib/python3.11/site-packages/kubespawner/reflector.py", line 228, in _list_and_update
        initial_resources_raw = await list_method(**kwargs)
                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/api_client.py", line 185, in __call_api
        response_data = await self.request(
                        ^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/rest.py", line 193, in GET
        return (await self.request("GET", url,
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/rest.py", line 177, in request
        r = await self.pool_manager.request(**args)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
        conn = await self._connector.connect(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
        proto = await self._create_connection(req, traces, timeout)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
        _, proto = await self._create_direct_connection(req, traces, timeout)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
        raise last_exc
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
        transp, proto = await self._wrap_create_connection(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
        raise ClientConnectorCertificateError(req.connection_key, exc) from exc
    aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host <redacted>:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:992)')]
    
ERROR:asyncio:Task exception was never retrieved
future: <Task finished name='Task-2323' coro=<KubeSpawner._start_reflector.<locals>.catch_reflector_start() done, defined at /usr/local/lib/python3.11/site-packages/kubespawner/spawner.py:2418> exception=SystemExit(1)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1112, in create_connection
    transport, protocol = await self._create_connection_transport(
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1145, in _create_connection_transport
    await waiter
  File "/usr/local/lib/python3.11/asyncio/sslproto.py", line 574, in _on_handshake_complete
    raise handshake_exc
  File "/usr/local/lib/python3.11/asyncio/sslproto.py", line 556, in _do_handshake
    self._sslobj.do_handshake()
  File "/usr/local/lib/python3.11/ssl.py", line 979, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:992)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/kubespawner/spawner.py", line 2420, in catch_reflector_start
    await f
  File "/usr/local/lib/python3.11/site-packages/kubespawner/reflector.py", line 383, in start
    await self._list_and_update()
  File "/usr/local/lib/python3.11/site-packages/kubespawner/reflector.py", line 228, in _list_and_update
    initial_resources_raw = await list_method(**kwargs)
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/api_client.py", line 185, in __call_api
    response_data = await self.request(
                    ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/rest.py", line 193, in GET
    return (await self.request("GET", url,
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/kubernetes_asyncio/client/rest.py", line 177, in request
    r = await self.pool_manager.request(**args)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
    conn = await self._connector.connect(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
    raise last_exc
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
    raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host <redacted>:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get issuer certificate (_ssl.c:992)')]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/jupyterhub/app.py", line 3350, in launch_instance
    loop.start()
  File "/usr/local/lib/python3.11/site-packages/tornado/platform/asyncio.py", line 215, in start
    self.asyncio_loop.run_forever()
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
    self._run_once()
  File "/usr/local/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
    handle._run()
  File "/usr/local/lib/python3.11/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/usr/local/lib/python3.11/site-packages/kubespawner/spawner.py", line 2423, in catch_reflector_start
    sys.exit(1)
SystemExit: 1
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending name='Task-2321' coro=<shared_client.<locals>.close_client_task() running at /usr/local/lib/python3.11/site-packages/kubespawner/clients.py:58> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Exception ignored in: <coroutine object shared_client.<locals>.close_client_task at 0x7f3536b11c60>
RuntimeError: coroutine ignored GeneratorExit
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending name='Task-2324' coro=<ResourceReflector.start() running at /usr/local/lib/python3.11/site-packages/kubespawner/reflector.py:383> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[Task.task_wakeup()]>
ERROR:asyncio:Task was destroyed but it is pending!
task: <Task pending name='Task-2325' coro=<KubeSpawner._start_reflector.<locals>.catch_reflector_start() running at /usr/local/lib/python3.11/site-packages/kubespawner/spawner.py:2420> wait_for=<Task pending name='Task-2324' coro=<ResourceReflector.start() done, defined at /usr/local/lib/python3.11/site-packages/kubespawner/reflector.py:370> wait_for=<Future pending cb=[Task.task_wakeup()]> cb=[Task.task_wakeup()]>>

Below is my configuration file:

import glob
import os
import re
import sys
from binascii import a2b_hex
# a few extra imports added for the proxy workaround below

from tornado.httpclient import HTTPRequest
import logging
import subprocess

from jupyterhub.utils import url_path_join
from kubernetes_asyncio import client
from tornado.httpclient import AsyncHTTPClient



# Make sure that modules placed in the same directory as the jupyterhub config are added to the pythonpath
configuration_directory = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, configuration_directory)

from z2jh import (
    get_config,
    get_name,
    get_name_env,
    get_secret_value,
    set_config_if_not_none,
)


def camelCaseify(s):
    """convert snake_case to camelCase

    For the common case where some_value is set from someValue
    so we don't have to specify the name twice.
    """
    return re.sub(r"_([a-z])", lambda m: m.group(1).upper(), s)


# Configure JupyterHub to use the curl backend for making HTTP requests,
# rather than the pure-python implementations. The default one starts
# being too slow to make a large number of requests to the proxy API
# at the rate required.
#AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")

c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"

# Connect to a proxy running in a different pod. Note that *_SERVICE_*
# environment variables are set by Kubernetes for Services
c.ConfigurableHTTPProxy.api_url = (
    f'http://{get_name("proxy-api")}:{get_name_env("proxy-api", "_SERVICE_PORT")}'
)

os.environ['PYCURL_SSL_LIBRARY'] = 'nss'
subprocess.call([sys.executable, '-m', 'pip', 'install', '--compile', '--proxy', f'http://{get_name("proxy-api")}:{get_name_env("proxy-api", "_SERVICE_PORT")}', 'pycurl'])
import pycurl

def configure_proxy(curl):
        logging.error(curl.getinfo(pycurl.EFFECTIVE_URL))
        # we only want the Microsoft OAuth (microsoftonline) requests to use the proxy
        if "microsoftonline" in curl.getinfo(pycurl.EFFECTIVE_URL):
            logging.error("adding proxy")
            curl.setopt(pycurl.PROXY, f'http://{get_name("proxy-api")}')
            curl.setopt(pycurl.PROXYPORT, int(get_name_env("proxy-api", "_SERVICE_PORT")))

# never do this
HTTPRequest._DEFAULTS['prepare_curl_callback'] = configure_proxy


c.ConfigurableHTTPProxy.should_start = False

# Do not shut down user pods when hub is restarted
c.JupyterHub.cleanup_servers = False

# Check that the proxy has routes appropriately setup
c.JupyterHub.last_activity_interval = 60

# Don't wait at all before redirecting a spawning user to the progress page
c.JupyterHub.tornado_settings = {
    "slow_spawn_timeout": 0,
}


# configure the hub db connection
db_type = get_config("hub.db.type")
if db_type == "sqlite-pvc":
    c.JupyterHub.db_url = "sqlite:///jupyterhub.sqlite"
elif db_type == "sqlite-memory":
    c.JupyterHub.db_url = "sqlite://"
else:
    set_config_if_not_none(c.JupyterHub, "db_url", "hub.db.url")
db_password = get_secret_value("hub.db.password", None)
if db_password is not None:
    if db_type == "mysql":
        os.environ["MYSQL_PWD"] = db_password
    elif db_type == "postgres":
        os.environ["PGPASSWORD"] = db_password
    else:
        print(f"Warning: hub.db.password is ignored for hub.db.type={db_type}")


# c.JupyterHub configuration from Helm chart's configmap
for trait, cfg_key in (
    ("concurrent_spawn_limit", None),
    ("active_server_limit", None),
    ("base_url", None),
    ("allow_named_servers", None),
    ("named_server_limit_per_user", None),
    ("authenticate_prometheus", None),
    ("redirect_to_server", None),
    ("shutdown_on_logout", None),
    ("template_paths", None),
    ("template_vars", None),
):
    if cfg_key is None:
        cfg_key = camelCaseify(trait)
    set_config_if_not_none(c.JupyterHub, trait, "hub." + cfg_key)

# hub_bind_url configures what the JupyterHub process within the hub pod's
# container should listen to.
hub_container_port = 8081
c.JupyterHub.hub_bind_url = f"http://:{hub_container_port}"

# hub_connect_url is the URL for connecting to the hub for use by external
# JupyterHub services such as the proxy. Note that *_SERVICE_* environment
# variables are set by Kubernetes for Services.
c.JupyterHub.hub_connect_url = (
    f'http://{get_name("hub")}:{get_name_env("hub", "_SERVICE_PORT")}'
)

# implement common labels
# this duplicates the jupyterhub.commonLabels helper
common_labels = c.KubeSpawner.common_labels = {}
common_labels["app"] = get_config(
    "nameOverride",
    default=get_config("Chart.Name", "jupyterhub"),
)
common_labels["heritage"] = "jupyterhub"
chart_name = get_config("Chart.Name")
chart_version = get_config("Chart.Version")
if chart_name and chart_version:
    common_labels["chart"] = "{}-{}".format(
        chart_name,
        chart_version.replace("+", "_"),
    )
release = get_config("Release.Name")
if release:
    common_labels["release"] = release

c.KubeSpawner.namespace = os.environ.get("POD_NAMESPACE", "default")

# Max number of consecutive failures before the Hub restarts itself
# requires jupyterhub 0.9.2
set_config_if_not_none(
    c.Spawner,
    "consecutive_failure_limit",
    "hub.consecutiveFailureLimit",
)

for trait, cfg_key in (
    ("pod_name_template", None),
    ("start_timeout", None),
    ("image_pull_policy", "image.pullPolicy"),
    # ('image_pull_secrets', 'image.pullSecrets'), # Managed manually below
    ("events_enabled", "events"),
    ("extra_labels", None),
    ("extra_annotations", None),
    # ("allow_privilege_escalation", None), # Managed manually below
    ("uid", None),
    ("fs_gid", None),
    ("service_account", "serviceAccountName"),
    ("storage_extra_labels", "storage.extraLabels"),
    # ("tolerations", "extraTolerations"), # Managed manually below
    ("node_selector", None),
    ("node_affinity_required", "extraNodeAffinity.required"),
    ("node_affinity_preferred", "extraNodeAffinity.preferred"),
    ("pod_affinity_required", "extraPodAffinity.required"),
    ("pod_affinity_preferred", "extraPodAffinity.preferred"),
    ("pod_anti_affinity_required", "extraPodAntiAffinity.required"),
    ("pod_anti_affinity_preferred", "extraPodAntiAffinity.preferred"),
    ("lifecycle_hooks", None),
    ("init_containers", None),
    ("extra_containers", None),
    ("mem_limit", "memory.limit"),
    ("mem_guarantee", "memory.guarantee"),
    ("cpu_limit", "cpu.limit"),
    ("cpu_guarantee", "cpu.guarantee"),
    ("extra_resource_limits", "extraResource.limits"),
    ("extra_resource_guarantees", "extraResource.guarantees"),
    ("environment", "extraEnv"),
    ("profile_list", None),
    ("extra_pod_config", None),
):
    if cfg_key is None:
        cfg_key = camelCaseify(trait)
    set_config_if_not_none(c.KubeSpawner, trait, "singleuser." + cfg_key)

image = get_config("singleuser.image.name")
if image:
    tag = get_config("singleuser.image.tag")
    if tag:
        image = f"{image}:{tag}"

    c.KubeSpawner.image = image

# allow_privilege_escalation defaults to False in KubeSpawner 2+. Since it's a
# property where None, False, and True all are valid values that users of the
# Helm chart may want to set, we can't use the set_config_if_not_none helper
# function as someone may want to override the default False value to None.
#
c.KubeSpawner.allow_privilege_escalation = get_config(
    "singleuser.allowPrivilegeEscalation"
)

# Combine imagePullSecret.create (single), imagePullSecrets (list), and
# singleuser.image.pullSecrets (list).
image_pull_secrets = []
if get_config("imagePullSecret.automaticReferenceInjection") and get_config(
    "imagePullSecret.create"
):
    image_pull_secrets.append(get_name("image-pull-secret"))
if get_config("imagePullSecrets"):
    image_pull_secrets.extend(get_config("imagePullSecrets"))
if get_config("singleuser.image.pullSecrets"):
    image_pull_secrets.extend(get_config("singleuser.image.pullSecrets"))
if image_pull_secrets:
    c.KubeSpawner.image_pull_secrets = image_pull_secrets

# scheduling:
if get_config("scheduling.userScheduler.enabled"):
    c.KubeSpawner.scheduler_name = get_name("user-scheduler")
if get_config("scheduling.podPriority.enabled"):
    c.KubeSpawner.priority_class_name = get_name("priority")

# add node-purpose affinity
match_node_purpose = get_config("scheduling.userPods.nodeAffinity.matchNodePurpose")
if match_node_purpose:
    node_selector = dict(
        matchExpressions=[
            dict(
                key="hub.jupyter.org/node-purpose",
                operator="In",
                values=["user"],
            )
        ],
    )
    if match_node_purpose == "prefer":
        c.KubeSpawner.node_affinity_preferred.append(
            dict(
                weight=100,
                preference=node_selector,
            ),
        )
    elif match_node_purpose == "require":
        c.KubeSpawner.node_affinity_required.append(node_selector)
    elif match_node_purpose == "ignore":
        pass
    else:
        raise ValueError(
            f"Unrecognized value for matchNodePurpose: {match_node_purpose}"
        )

# Combine the common tolerations for user pods with singleuser tolerations
scheduling_user_pods_tolerations = get_config("scheduling.userPods.tolerations", [])
singleuser_extra_tolerations = get_config("singleuser.extraTolerations", [])
tolerations = scheduling_user_pods_tolerations + singleuser_extra_tolerations
if tolerations:
    c.KubeSpawner.tolerations = tolerations

# Configure dynamically provisioning pvc
storage_type = get_config("singleuser.storage.type")
if storage_type == "dynamic":
    pvc_name_template = get_config("singleuser.storage.dynamic.pvcNameTemplate")
    c.KubeSpawner.pvc_name_template = pvc_name_template
    volume_name_template = get_config("singleuser.storage.dynamic.volumeNameTemplate")
    c.KubeSpawner.storage_pvc_ensure = True
    set_config_if_not_none(
        c.KubeSpawner, "storage_class", "singleuser.storage.dynamic.storageClass"
    )
    set_config_if_not_none(
        c.KubeSpawner,
        "storage_access_modes",
        "singleuser.storage.dynamic.storageAccessModes",
    )
    set_config_if_not_none(
        c.KubeSpawner, "storage_capacity", "singleuser.storage.capacity"
    )

    # Add volumes to singleuser pods
    c.KubeSpawner.volumes = [
        {
            "name": volume_name_template,
            "persistentVolumeClaim": {"claimName": pvc_name_template},
        }
    ]
    c.KubeSpawner.volume_mounts = [
        {
            "mountPath": get_config("singleuser.storage.homeMountPath"),
            "name": volume_name_template,
        }
    ]
elif storage_type == "static":
    pvc_claim_name = get_config("singleuser.storage.static.pvcName")
    c.KubeSpawner.volumes = [
        {"name": "home", "persistentVolumeClaim": {"claimName": pvc_claim_name}}
    ]

    c.KubeSpawner.volume_mounts = [
        {
            "mountPath": get_config("singleuser.storage.homeMountPath"),
            "name": "home",
            "subPath": get_config("singleuser.storage.static.subPath"),
        }
    ]

# Inject singleuser.extraFiles as volumes and volumeMounts with data loaded from
# the dedicated k8s Secret prepared to hold the extraFiles actual content.
extra_files = get_config("singleuser.extraFiles", {})
if extra_files:
    volume = {
        "name": "files",
    }
    items = []
    for file_key, file_details in extra_files.items():
        # Each item maps a key in the k8s Secret to a path in this abstract
        # volume; the goal is only to allow setting the mode / permissions,
        # so we don't change the mapping itself.
        item = {
            "key": file_key,
            "path": file_key,
        }
        if "mode" in file_details:
            item["mode"] = file_details["mode"]
        items.append(item)
    volume["secret"] = {
        "secretName": get_name("singleuser"),
        "items": items,
    }
    c.KubeSpawner.volumes.append(volume)

    volume_mounts = []
    for file_key, file_details in extra_files.items():
        volume_mounts.append(
            {
                "mountPath": file_details["mountPath"],
                "subPath": file_key,
                "name": "files",
            }
        )
    c.KubeSpawner.volume_mounts.extend(volume_mounts)

# Inject extraVolumes / extraVolumeMounts
c.KubeSpawner.volumes.extend(get_config("singleuser.storage.extraVolumes", []))
c.KubeSpawner.volume_mounts.extend(
    get_config("singleuser.storage.extraVolumeMounts", [])
)

c.JupyterHub.services = []
c.JupyterHub.load_roles = []

# jupyterhub-idle-culler's permissions are scoped to what it needs only, see
# https://github.com/jupyterhub/jupyterhub-idle-culler#permissions.
#
if get_config("cull.enabled", False):
    jupyterhub_idle_culler_role = {
        "name": "jupyterhub-idle-culler",
        "scopes": [
            "list:users",
            "read:users:activity",
            "read:servers",
            "delete:servers",
            # "admin:users", # dynamically added if --cull-users is passed
        ],
        # assign the role to a jupyterhub service, so it gains these permissions
        "services": ["jupyterhub-idle-culler"],
    }

    cull_cmd = ["python3", "-m", "jupyterhub_idle_culler"]
    base_url = c.JupyterHub.get("base_url", "/")
    cull_cmd.append("--url=http://localhost:8081" + url_path_join(base_url, "hub/api"))

    cull_timeout = get_config("cull.timeout")
    if cull_timeout:
        cull_cmd.append(f"--timeout={cull_timeout}")

    cull_every = get_config("cull.every")
    if cull_every:
        cull_cmd.append(f"--cull-every={cull_every}")

    cull_concurrency = get_config("cull.concurrency")
    if cull_concurrency:
        cull_cmd.append(f"--concurrency={cull_concurrency}")

    if get_config("cull.users"):
        cull_cmd.append("--cull-users")
        jupyterhub_idle_culler_role["scopes"].append("admin:users")

    if not get_config("cull.adminUsers"):
        cull_cmd.append("--cull-admin-users=false")

    if get_config("cull.removeNamedServers"):
        cull_cmd.append("--remove-named-servers")

    cull_max_age = get_config("cull.maxAge")
    if cull_max_age:
        cull_cmd.append(f"--max-age={cull_max_age}")

    c.JupyterHub.services.append(
        {
            "name": "jupyterhub-idle-culler",
            "command": cull_cmd,
        }
    )
    c.JupyterHub.load_roles.append(jupyterhub_idle_culler_role)

for key, service in get_config("hub.services", {}).items():
    # c.JupyterHub.services is a list of dicts, but
    # hub.services is a dict of dicts to make the config mergable
    service.setdefault("name", key)

    # As the api_token could be exposed in hub.existingSecret, we need to read
    # it from there or fall back to the chart managed k8s Secret's value.
    service.pop("apiToken", None)
    service["api_token"] = get_secret_value(f"hub.services.{key}.apiToken")

    c.JupyterHub.services.append(service)

for key, role in get_config("hub.loadRoles", {}).items():
    # c.JupyterHub.load_roles is a list of dicts, but
    # hub.loadRoles is a dict of dicts to make the config mergable
    role.setdefault("name", key)

    c.JupyterHub.load_roles.append(role)

# respect explicit null command (distinct from unspecified)
# this avoids relying on KubeSpawner.cmd's default being None
_unspecified = object()
specified_cmd = get_config("singleuser.cmd", _unspecified)
if specified_cmd is not _unspecified:
    c.Spawner.cmd = specified_cmd

set_config_if_not_none(c.Spawner, "default_url", "singleuser.defaultUrl")

cloud_metadata = get_config("singleuser.cloudMetadata", {})

if cloud_metadata.get("blockWithIptables") == True:
    # Use iptables to block access to cloud metadata by default
    network_tools_image_name = get_config("singleuser.networkTools.image.name")
    network_tools_image_tag = get_config("singleuser.networkTools.image.tag")
    network_tools_resources = get_config("singleuser.networkTools.resources")
    ip_block_container = client.V1Container(
        name="block-cloud-metadata",
        image=f"{network_tools_image_name}:{network_tools_image_tag}",
        command=[
            "iptables",
            "-A",
            "OUTPUT",
            "-d",
            cloud_metadata.get("ip", "169.254.169.254"),
            "-j",
            "DROP",
        ],
        security_context=client.V1SecurityContext(
            privileged=True,
            run_as_user=0,
            capabilities=client.V1Capabilities(add=["NET_ADMIN"]),
        ),
        resources=network_tools_resources,
    )

    c.KubeSpawner.init_containers.append(ip_block_container)


if get_config("debug.enabled", False):
    c.JupyterHub.log_level = "DEBUG"
    c.Spawner.debug = True

# load potentially seeded secrets
#
# NOTE: ConfigurableHTTPProxy.auth_token is set through an environment variable
#       that is set using the chart managed secret.
c.JupyterHub.cookie_secret = get_secret_value("hub.config.JupyterHub.cookie_secret")
# NOTE: CryptKeeper.keys should be a list of strings, but we have encoded as a
#       single string joined with ; in the k8s Secret.
#
c.CryptKeeper.keys = get_secret_value("hub.config.CryptKeeper.keys").split(";")

# load hub.config values, except potentially seeded secrets already loaded
for app, cfg in get_config("hub.config", {}).items():
    if app == "JupyterHub":
        cfg.pop("proxy_auth_token", None)
        cfg.pop("cookie_secret", None)
        cfg.pop("services", None)
    elif app == "ConfigurableHTTPProxy":
        cfg.pop("auth_token", None)
    elif app == "CryptKeeper":
        cfg.pop("keys", None)
    c[app].update(cfg)

# load /usr/local/etc/jupyterhub/jupyterhub_config.d config files
config_dir = "/usr/local/etc/jupyterhub/jupyterhub_config.d"
if os.path.isdir(config_dir):
    for file_path in sorted(glob.glob(f"{config_dir}/*.py")):
        file_name = os.path.basename(file_path)
        print(f"Loading {config_dir} config: {file_name}")
        with open(file_path) as f:
            file_content = f.read()
        # compiling makes debugging easier: https://stackoverflow.com/a/437857
        exec(compile(source=file_content, filename=file_name, mode="exec"))

# execute hub.extraConfig entries
for key, config_py in sorted(get_config("hub.extraConfig", {}).items()):
    print(f"Loading extra config: {key}")
    exec(config_py)

Could you explain which certificate this refers to?

Could you also show us your Z2JH configuration (YAML) file?

Hi @manics, it is referring to the certificates inside the hub pod's service account, which the hub pod uses to interact with the API server when I log in on the hub UI to open a single-user server.
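To double-check that, here is a small sketch (assuming the cryptography package, version 39 or newer, is importable in the hub image) that lists the subject and issuer of every certificate in the mounted bundle; if the issuer of the API server's certificate is not in this list, that would match the "unable to get issuer certificate" error:

# Sketch: list every certificate in the mounted service-account CA bundle.
# Assumes the `cryptography` package (>= 39.0) is available in the hub image.
from cryptography import x509

CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(CA_PATH, "rb") as f:
    certs = x509.load_pem_x509_certificates(f.read())

for cert in certs:
    print("subject:", cert.subject.rfc4514_string())
    print("issuer: ", cert.issuer.rfc4514_string())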
Below is my configuration file:

fullnameOverride: ""
nameOverride:

custom: {}

imagePullSecret:
  create: false
  automaticReferenceInjection: true
  registry:
  username:
  password:
  email:

imagePullSecrets: []

hub:
  revisionHistoryLimit:
  config:
    JupyterHub:
      admin_access: true
      authenticator_class: dummy
  service:
    type: ClusterIP
    annotations: {}
    ports:
      nodePort:
    extraPorts: []
    loadBalancerIP:
  baseUrl: /
  cookieSecret:
  initContainers: []
  nodeSelector: {}
  tolerations: []
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  activeServerLimit:
  deploymentStrategy:
    type: Recreate
  db:
    type: sqlite-pvc
    upgrade:
    pvc:
      annotations: {}
      selector: {}
      accessModes:
        - ReadWriteOnce
      storage: 1Gi
      subPath:
      storageClassName: longhorn
    url:
    password:
  labels: {}
  annotations: {}
  command: []
  args: []
  extraConfig: {}
  extraFiles: {}
  extraEnv: {}
  extraContainers: []
  extraVolumes: []
  extraVolumeMounts: []
  image:
    name: <redacted>
    tag: "0.5.0"
    pullPolicy:
    pullSecrets: []
  resources:
    limits:
      cpu: 450m
      memory: 1000Mi
    requests:
      cpu: 250m
      memory: 768Mi
  podSecurityContext:
    fsGroup: 1000
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
  lifecycle: {}
  loadRoles: {}
  services: {}
  pdb:
    enabled: false
    maxUnavailable:
    minAvailable: 1
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: false
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: true
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  allowNamedServers: false
  namedServerLimitPerUser:
  authenticatePrometheus:
  redirectToServer:
  shutdownOnLogout:
  templatePaths: []
  templateVars: {}
  livenessProbe:
    enabled: true
    initialDelaySeconds: 300
    periodSeconds: 10
    failureThreshold: 30
    timeoutSeconds: 3
  readinessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 2
    failureThreshold: 1000
    timeoutSeconds: 1
  existingSecret:
  serviceAccount:
    create: true
    name:
    annotations: {}
  extraPodSpec: {}

rbac:
  create: true

proxy:
  secretToken:
  annotations: {}
  deploymentStrategy:
    type: Recreate
    rollingUpdate:
  # service relates to the proxy-public service
  service:
    type: ClusterIP
    labels: {}
    annotations: {}
    nodePorts:
      http:
      https:
    disableHttpPort: false
    extraPorts: []
    loadBalancerIP:
    loadBalancerSourceRanges: []
  chp:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "0.2.0" # https://github.com/jupyterhub/configurable-http-proxy/releases
      pullPolicy: IfNotPresent
      pullSecrets: []
    extraCommandLineFlags: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 10
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    resources:
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
    defaultTarget:
    errorTarget:
    extraEnv: {}
    nodeSelector: {}
    tolerations: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: false
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    extraPodSpec: {}
  traefik:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "0.1.0" # ref: https://hub.docker.com/_/traefik?tab=tags
      pullPolicy:
      pullSecrets: []
    hsts:
      includeSubdomains: false
      preload: false
      maxAge: 15724800 # About 6 months
    resources:
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
    labels: {}
    extraInitContainers: []
    extraEnv: {}
    extraVolumes: []
    extraVolumeMounts: []
    extraStaticConfig: {}
    extraDynamicConfig: {}
    nodeSelector: {}
    tolerations: []
    extraPorts: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: false
        dnsPortsPrivateIPs: false
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  secretSync:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "0.2.0"
      pullPolicy:
      pullSecrets: []
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
  labels: {}
  https:
    enabled: false
    type: letsencrypt
    #type: letsencrypt, manual, offload, secret
    letsencrypt:
      contactEmail:
      # Specify custom server here (https://acme-staging-v02.api.letsencrypt.org/directory) to hit staging LE
      acmeServer: https://acme-v02.api.letsencrypt.org/directory
    manual:
      key:
      cert:
    secret:
      name:
      key: tls.key
      crt: tls.crt
    hosts: []

singleuser:
  podNameTemplate:
  extraTolerations: []
  nodeSelector: {}
  extraNodeAffinity:
    required: []
    preferred: []
  extraPodAffinity:
    required: []
    preferred: []
  extraPodAntiAffinity:
    required: []
    preferred: []
  networkTools:
    image:
      name: <redacted>
      tag: "0.2.0"
      pullPolicy:
      pullSecrets: []
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
  cloudMetadata:
    blockWithIptables: true
    ip: 169.254.169.254
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: false
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: true
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  events: true
  extraAnnotations: {}
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
  extraFiles: {}
  extraEnv: {}
  lifecycleHooks: {}
  initContainers: []
  extraContainers: []
  allowPrivilegeEscalation: false
  uid: 1000
  fsGid: 100
  serviceAccountName:
  storage:
    type: dynamic
    extraLabels: {}
    extraVolumes: []
    extraVolumeMounts: []
    static:
      pvcName:
      subPath: "{username}"
    capacity: 1Gi
    homeMountPath: /storage/data/jupyterhub/sunkumar
    dynamic:
      storageClass: longhorn
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
      storageAccessModes: [ReadWriteOnce]
  image:
    name: <redacted>
    tag: "0.3.0"
    pullPolicy:
    pullSecrets: []
  startTimeout: 300
  cpu:
    limit:
    guarantee:
  memory:
    limit:
    guarantee: 1G
  extraResource:
    limits: {}
    guarantees: {}
  cmd: jupyterhub-singleuser
  defaultUrl:
  extraPodConfig: {}
  profileList: []

scheduling:
  userScheduler:
    enabled: true
    revisionHistoryLimit:
    replicas: 2
    logLevel: 4
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation
          - name: NodeAffinity
          - name: InterPodAffinity
          - name: NodeResourcesFit
          - name: ImageLocality
        enabled:
          - name: NodeAffinity
            weight: 14631
          - name: InterPodAffinity
            weight: 1331
          - name: NodeResourcesFit
            weight: 121
          - name: ImageLocality
            weight: 11
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
            type: MostAllocated
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "v1.24.8" # ref: https://github.com/kubernetes/website/blob/1a795f8758ae8cc2b13000495309580a376c324d/content/en/releases/patch-releases.md
      pullPolicy: IfNotPresent
      pullSecrets: []
    nodeSelector: {}
    tolerations: []
    labels: {}
    annotations: {}
    pdb:
      enabled: true
      maxUnavailable: 1
      minAvailable:
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  podPriority:
    enabled: false
    globalDefault: false
    defaultPriority: 0
    imagePullerPriority: -5
    userPlaceholderPriority: -10
  userPlaceholder:
    enabled: false
    image:
      name: <redacted>
      tag: "3.7"
      pullPolicy: IfNotPresent
      pullSecrets: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 300
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 10
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    revisionHistoryLimit:
    replicas: 0
    labels: {}
    annotations: {}
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
  corePods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: core
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: core
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer
  userPods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: user
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: user
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer

# prePuller relates to the hook|continuous-image-puller DaemonSets
prePuller:
  revisionHistoryLimit:
  labels: {}
  annotations: {}
  resources:
    limits:
      cpu: 450m
      memory: 1000Mi
    requests:
      cpu: 250m
      memory: 768Mi
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 65534 # nobody user
    runAsGroup: 65534 # nobody group
    allowPrivilegeEscalation: false
  extraTolerations: []
  # hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
  hook:
    enabled: false
    pullOnlyOnChanges: true
    # image and the configuration below relates to the hook-image-awaiter Job
    image:
      name: <redacted>
      tag: "0.1.0"
      pullPolicy: IfNotPresent
      pullSecrets: []
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    livenessProbe:
      enabled: true
      initialDelaySeconds: 300
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 10
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    podSchedulingWaitDuration: 10
    nodeSelector: {}
    tolerations: []
    resources:
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
    serviceAccount:
      create: true
      name:
      annotations: {}
  continuous:
    enabled: false
  pullProfileListImages: true
  extraImages: {}
  pause:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "3.7"
      pullPolicy: IfNotPresent
      pullSecrets: []

ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ingress.allow-http: "true"
  ingressClassName:
  hosts:
    - <redacted>
  pathSuffix:
  pathType: Prefix
  tls: []
cull:
  enabled: true
  users: false # --cull-users
  adminUsers: true # --cull-admin-users
  removeNamedServers: false # --remove-named-servers
  timeout: 3600 # --timeout
  every: 600 # --cull-every
  concurrency: 10 # --concurrency
  maxAge: 0 # --max-age

debug:
  enabled: true

global:
  safeToShowValues: false

Sorry, I still don’t understand. Which “API” server are you referring to: the Kubernetes API, the JupyterHub API, or the singleuser-server/jupyter-server API?

It looks like you’ve pasted the entire values.yaml file from the Helm chart, which is just the defaults; you should only need to specify the parameters you want to override. At the moment it’s very hard to see what you’ve configured. Please could you show just your customisations? Thanks!

@manics I am referring to the Kubernetes API, on port 443.
This scenario arises when I enter my login details in the hub UI and, in the background, the spawner tries to spin up the single-user notebook server.
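To see which CA file the hub's Kubernetes client actually loads at that point, a sketch like the following can be run in the hub pod. It assumes kubernetes_asyncio (a KubeSpawner dependency) is importable and that load_incluster_config() sets the default client Configuration, as recent versions do:

# Sketch: print what the in-cluster loader configures for the Kubernetes client.
# Assumes a recent kubernetes_asyncio where load_incluster_config() sets the
# default client Configuration.
from kubernetes_asyncio import config
from kubernetes_asyncio.client import Configuration

config.load_incluster_config()
cfg = Configuration.get_default_copy()
print("host:       ", cfg.host)
print("ssl_ca_cert:", cfg.ssl_ca_cert)
print("verify_ssl: ", cfg.verify_ssl)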

Although I have not changed much from the default configuration, I have trimmed it down to make it easier to see what is customised:

fullnameOverride: ""
nameOverride:

custom: {}

imagePullSecret:
  create: false

imagePullSecrets: []

hub:
  config:
    JupyterHub:
      admin_access: true
      authenticator_class: dummy
  service:
    type: ClusterIP
    loadBalancerIP:
  baseUrl: /
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  deploymentStrategy:
    type: Recreate
  db:
    type: sqlite-pvc
    pvc:
      accessModes:
        - ReadWriteOnce
      storage: 1Gi
      subPath:
      storageClassName: longhorn
  image:
    name: <redacted>
    tag: "0.5.0"
  resources:
    limits:
      cpu: 450m
      memory: 1000Mi
    requests:
      cpu: 250m
      memory: 768Mi
  podSecurityContext:
    fsGroup: 1000
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
  pdb:
    enabled: false
  networkPolicy:
    enabled: true
    egressAllowRules:
      cloudMetadataServer: false
      privateIPs: true
  livenessProbe:
    enabled: true
    initialDelaySeconds: 300
    periodSeconds: 10
    failureThreshold: 30
    timeoutSeconds: 3
  readinessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 2
    failureThreshold: 1000
    timeoutSeconds: 1
  serviceAccount:
    create: true
    name:

rbac:
  create: true

proxy:
  deploymentStrategy:
    type: Recreate
  service:
    type: ClusterIP
    disableHttpPort: false
  chp:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "0.2.0" 
      pullPolicy: IfNotPresent
    livenessProbe:
      enabled: true
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 10
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    resources:
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
    networkPolicy:
      enabled: true
      egressAllowRules:
        cloudMetadataServer: false
        privateIPs: true
      allowedIngressPorts: [http, https]
    pdb:
      enabled: false
  secretSync:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "0.2.0"
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
  https:
    enabled: false


singleuser:
  networkTools:
    image:
      name: <redacted>
      tag: "0.2.0"
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
  networkPolicy:
    enabled: true
    egressAllowRules:
      cloudMetadataServer: false
      privateIPs: true
  events: true
  allowPrivilegeEscalation: false
  uid: 1000
  fsGid: 100
  serviceAccountName:
  storage:
    type: dynamic
    static:
      pvcName:
      subPath: "{username}"
    capacity: 1Gi
    homeMountPath: /storage/data/jupyterhub/sunkumar
    dynamic:
      storageClass: longhorn
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
      storageAccessModes: [ReadWriteOnce]
  image:
    name: <redacted>
    tag: "0.3.0"
  startTimeout: 300
  memory:
    limit:
    guarantee: 1G
  cmd: jupyterhub-singleuser

scheduling:
  userScheduler:
    enabled: true
    replicas: 2
    logLevel: 4
    plugins:
      score:
        disabled:
          - name: NodeResourcesBalancedAllocation
          - name: NodeAffinity
          - name: InterPodAffinity
          - name: NodeResourcesFit
          - name: ImageLocality
        enabled:
          - name: NodeAffinity
            weight: 14631
          - name: InterPodAffinity
            weight: 1331
          - name: NodeResourcesFit
            weight: 121
          - name: ImageLocality
            weight: 11
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
            type: MostAllocated
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
    image:
      name: <redacted>
      tag: "v1.24.8" 
      pullPolicy: IfNotPresent
    pdb:
      enabled: true
      maxUnavailable: 1
    resources: 
      limits:
        cpu: 450m
        memory: 1000Mi
      requests:
        cpu: 250m
        memory: 768Mi
    serviceAccount:
      create: true
      name:
  podPriority:
    enabled: false
  userPlaceholder:
    enabled: false
  corePods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: core
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: core
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer
  userPods:
    tolerations:
      - key: hub.jupyter.org/dedicated
        operator: Equal
        value: user
        effect: NoSchedule
      - key: hub.jupyter.org_dedicated
        operator: Equal
        value: user
        effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer

prePuller:
  hook:
    enabled: false
  continuous:
    enabled: false
  
ingress:
  enabled: true
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ingress.allow-http: "true"
  ingressClassName:
  hosts:
    - <redacted>
  pathSuffix:
  pathType: Prefix
  tls: []

cull:
  enabled: true
  users: false
  adminUsers: true 
  removeNamedServers: false 
  timeout: 3600 
  every: 600 
  concurrency: 10 
  maxAge: 0 

debug:
  enabled: true

global:
  safeToShowValues: false

JupyterHub on Z2JH uses a standard Kubernetes client library (kubernetes_asyncio, via KubeSpawner) to access the K8s API.

I’d start by investigating how that library behaves with your K8s cluster, and whether it’s a misconfiguration with your K8s deployment or a problem with that library.
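For example, a minimal test run from inside the hub pod (a sketch only: kubernetes_asyncio is a dependency of KubeSpawner, so it should be importable there, and you'll need to substitute your own namespace):

import asyncio

from kubernetes_asyncio import client, config


async def main():
    # Load token + CA from /var/run/secrets/kubernetes.io/serviceaccount/,
    # the same in-cluster configuration the failing reflector uses.
    config.load_incluster_config()
    async with client.ApiClient() as api:
        v1 = client.CoreV1Api(api)
        # Assumption: adjust the namespace to wherever your hub runs.
        pods = await v1.list_namespaced_pod(namespace="default")
        print([p.metadata.name for p in pods.items])


asyncio.run(main())

If this reproduces the same SSLCertVerificationError, the problem is between that library (or the mounted ca.crt) and your API server rather than in JupyterHub itself.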

Based on what you’ve said so far I can’t think of anything that can be done in JupyterHub.

Any updates on this? I'm having the same problem.