Kubernetes Helm Deployment Unable to Connect to Server Using IPv6

Can you check that the new parameters were applied as expected, using kubectl get service ... -o yaml (or kubectl describe)?

Assuming it’s as expected, I think it’s worth testing with a simpler minimal application, e.g. manually create YAML manifests for an Nginx deployment and service, and see if you can find a configuration that deploys a load balancer. Then we can work out how to get that into Z2JH.
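A minimal test might look something like the sketch below. The annotations are assumptions for an AWS Load Balancer Controller setup with an IPv6 NLB; adjust them for your environment (this is untested, not a known-working configuration):

```yaml
# Hypothetical minimal test: an Nginx Deployment plus a LoadBalancer Service.
# The annotations assume the AWS Load Balancer Controller; tweak the
# ip-address-type and scheme for your cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-test
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: dualstack
spec:
  type: LoadBalancer
  ipFamilies: [IPv6]
  ipFamilyPolicy: SingleStack
  selector:
    app: nginx-test
  ports:
  - port: 80
    targetPort: 80
```

If this service gets an EXTERNAL-IP while proxy-public stays pending, the difference between the two service specs should narrow down the problem.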


Sure thing, let me check and get back to you.

Also just noticed: the EKS cluster does not support IPv4, only IPv6, so dual stack will not work?

See below the YAML for each Service, now updated. Ignore the chart version or release; I just updated to 4.0.1 and repackaged it with my changes to release and test.

hub

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: jupyterhub
    meta.helm.sh/release-namespace: jupyterhub
    prometheus.io/path: /hub/metrics
    prometheus.io/port: "8081"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-01-07T23:31:08Z"
  labels:
    app: jupyterhub
    app.kubernetes.io/component: hub
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    chart: jupyterhub-4.0.1
    component: hub
    helm.sh/chart: jupyterhub-4.0.1
    heritage: Helm
    release: jupyterhub
  name: hub
  namespace: jupyterhub
  resourceVersion: "997678035"
  uid: 01d74829-16b0-4518-b523-4a65879acfb5
spec:
  clusterIP: fd5f:c06e:e955::dae
  clusterIPs:
  - fd5f:c06e:e955::dae
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - name: hub
    port: 8081
    protocol: TCP
    targetPort: http
  selector:
    app: jupyterhub
    component: hub
    release: jupyterhub
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

proxy api

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: jupyterhub
    meta.helm.sh/release-namespace: jupyterhub
  creationTimestamp: "2025-01-07T23:31:08Z"
  labels:
    app: jupyterhub
    app.kubernetes.io/component: proxy-api
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    chart: jupyterhub-4.0.1
    component: proxy-api
    helm.sh/chart: jupyterhub-4.0.1
    heritage: Helm
    release: jupyterhub
  name: proxy-api
  namespace: jupyterhub
  resourceVersion: "997678031"
  uid: 8ff9f5b7-8cea-4e9a-8723-0f191eb19d13
spec:
  clusterIP: fd5f:c06e:e955::fa5b
  clusterIPs:
  - fd5f:c06e:e955::fa5b
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8001
    protocol: TCP
    targetPort: api
  selector:
    app: jupyterhub
    component: proxy
    release: jupyterhub
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

proxy public

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: jupyterhub
    meta.helm.sh/release-namespace: jupyterhub
    service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  creationTimestamp: "2025-01-07T23:31:08Z"
  labels:
    app: jupyterhub
    app.kubernetes.io/component: proxy-public
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    chart: jupyterhub-4.0.1
    component: proxy-public
    helm.sh/chart: jupyterhub-4.0.1
    heritage: Helm
    release: jupyterhub
  name: proxy-public
  namespace: jupyterhub
  resourceVersion: "997678041"
  uid: 3c7ad15b-6d5b-4209-8d39-0d172524629f
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: fd5f:c06e:e955::dd6d
  clusterIPs:
  - fd5f:c06e:e955::dd6d
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  loadBalancerClass: service.k8s.aws/nlb
  ports:
  - name: http
    nodePort: 32145
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: jupyterhub
    component: proxy
    release: jupyterhub
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
NAME           TYPE           CLUSTER-IP             EXTERNAL-IP   PORT(S)        AGE
hub            ClusterIP      fd5f:c06e:e955::dae    <none>        8081/TCP       9m34s
proxy-api      ClusterIP      fd5f:c06e:e955::fa5b   <none>        8001/TCP       9m34s
proxy-public   LoadBalancer   fd5f:c06e:e955::dd6d   <pending>     80:32145/TCP   9m34s

I haven’t tried IPv6 on AWS, but “The Journey to IPv6 on Amazon EKS: Foundation (Part 1)” on the AWS Containers blog suggests dual stack should work.

I’m afraid I don’t know why your LoadBalancer is stuck at pending. Are there any clues in your AWS Load Balancer Controller logs?

If you’ve got an AWS support contract it might be worth asking AWS for help. I think it’s enough to share just the proxy-public service definition with them, since all we care about for now is the creation of the load balancer; I don’t think the rest of the Z2JH stack matters for this.


So I’m assuming the LB is stuck at pending because there are no public or internal load balancers in place. There is an Istio gateway VirtualService, but I’ve also tried using kubectl port-forward to the hub service to bypass it, and I get the same result.

Do you think the redirect loop issue is related to the load balancer?

I’m assuming from the logs that the pods should talk to each other via internal pod names and addresses, and not have to go out through the LB’s public address.

The absence of the load balancer shouldn’t affect the hub and singleuser pods, since as you say all communication should be internal to the cluster.

I’ve managed to spin up an IPv6 only K3s node with this K3S configuration:

# https://docs.k3s.io/installation/configuration#configuration-file
write-kubeconfig-mode: "0644"

cluster-init: true

# https://docs.k3s.io/networking/basic-network-options?cni=Canal#single-stack-ipv6-networking

cluster-cidr: <PREFIX>::/96
service-cidr: <PREFIX>:0001::/112

kube-controller-manager-arg:
  - node-cidr-mask-size-ipv6=96

node-name: <hostname>

# Ubuntu /etc/resolv.conf contains an internal resolver 127.0.0.53
# And may be missing an ipv6 server
kubelet-arg:
  - resolv-conf=/home/ubuntu/resolv-ipv6.conf

resolv-ipv6.conf

# https://one.one.one.one/dns/
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001

and this working Z2JH config:

# helm upgrade --install jh --repo=https://hub.jupyter.org/helm-chart/ jupyterhub --version=4.0.0 -f z2jh-config.yaml --wait

custom:
  netpol: &netpol
    networkPolicy:
      enabled: false

hub:
  config:
    JupyterHub:
      default_url: /hub/home
      cleanup_servers: true
    KubeSpawner:
      ip: "[::]"
  <<: *netpol

proxy:
  service:
    type: ClusterIP
  chp:
    <<: *netpol

scheduling:
  userScheduler:
    enabled: false
  userPlaceholder:
    enabled: false

singleuser:
  image:
    name: quay.io/jupyter/base-notebook
    tag: latest
  <<: *netpol

ingress:
  enabled: true
  hosts:
    - <external-k8s-ingress-domain>

debug:
  enabled: true

I tried the exact same config using an internal DNS name for the hosts: value, and I’m getting the same redirect loop error both via the internal DNS address and when port-forwarding the service to localhost.

I may also try the K3s config setup myself.

You mentioned an “istio gateway virtual service” earlier; is Istio used for anything else related to networking?


I don’t believe it routes internal pod traffic within the same namespace. Could a network policy be blocking it? Though if it were, I’d assume there would be no redirect loop at all.

I looked back into this and tried again with some additional Istio ingress configs and paths for /user and /hub, to no avail. Is there something else I can add to help debug and trace the redirect loop?

[I 2025-01-28 14:13:47.124 JupyterHub log:192] 302 GET /user/admin/ -> /hub/user/admin/ (@2600:1f18:96c:2f03::1204) 0.54ms
[I 2025-01-28 14:13:47.232 JupyterHub log:192] 302 GET /hub/user/admin/ -> /user/admin/?redirects=1 (admin@2600:1f18:96c:2f03::1204) 4.72ms
[I 2025-01-28 14:13:47.333 JupyterHub log:192] 302 GET /user/admin/?redirects=1 -> /hub/user/admin/?redirects=1 (@2600:1f18:96c:2f03::1204) 0.62ms
[W 2025-01-28 14:13:47.437 JupyterHub base:1844] Redirect loop detected on /hub/user/admin/?redirects=1
[D 2025-01-28 14:13:48.916 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::fffc) 0.73ms
[I 2025-01-28 14:13:49.438 JupyterHub log:192] 302 GET /hub/user/admin/?redirects=1 -> /user/admin/?redirects=2 (admin@2600:1f18:96c:2f03::1204) 2005.44ms
[I 2025-01-28 14:13:49.603 JupyterHub log:192] 302 GET /user/admin/?redirects=2 -> /hub/user/admin/?redirects=2 (@2600:1f18:96c:2f03::1204) 0.55ms
[W 2025-01-28 14:13:49.715 JupyterHub base:1844] Redirect loop detected on /hub/user/admin/?redirects=2
[D 2025-01-28 14:13:50.916 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::fffc) 0.65ms

Port-forwarding in Kubernetes is also giving me the same results.

apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/target: internal-k8s-private-123456778998876766.us-east-1.elb.amazonaws.com
  name: data-platform-jupyterhub
  namespace: data-platform-jupyterhub
spec:
  gateways:
  - istio-system/gateway-private
  hosts:
  - data-platform-jupyterhub.dev.net
  http:
  - match:
    - uri:
        prefix: /hub # Match the /hub path for JupyterHub
    route:
    - destination:
        host: hub.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8081
  - match:
    - uri:
        prefix: /user # Match the /user path to route user traffic to the hub
    route:
    - destination:
        host: hub.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8081
  - match:
    - uri:
        prefix: /api # API routes for JupyterHub
    route:
    - destination:
        host: hub.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8081
  - match:
    - uri:
        prefix: /static # Static assets (CSS, JS, etc.)
    route:
    - destination:
        host: hub.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8081
  - match:
    - uri:
        prefix: /login # Login route for JupyterHub
    route:
    - destination:
        host: hub.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8081
  # Additional route for proxy-api service (if needed, for API management)
  - match:
    - uri:
        prefix: /proxy-api # Routing traffic to the proxy-api service
    route:
    - destination:
        host: proxy-api.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8001
  # Route for metrics endpoint, if needed
  - match:
    - uri:
        prefix: /metrics # Ensure that Prometheus metrics are routed correctly
    route:
    - destination:
        host: hub.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 8081
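For reference, in a standard Z2JH deployment all external traffic is expected to enter through the proxy-public service (the configurable-http-proxy), which itself routes /hub to the hub pod and /user to the spawned user pods; routing /user straight to the hub service bypasses that proxy. A simplified VirtualService along these lines might be worth comparing against (an untested sketch reusing the names from the config above):

```yaml
# Hypothetical simplified VirtualService: send everything to proxy-public
# (port 80, per the service definition earlier in the thread) and let the
# configurable-http-proxy handle /hub vs /user routing internally.
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: data-platform-jupyterhub
  namespace: data-platform-jupyterhub
spec:
  gateways:
  - istio-system/gateway-private
  hosts:
  - data-platform-jupyterhub.dev.net
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: proxy-public.data-platform-jupyterhub.svc.cluster.local
        port:
          number: 80
```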

Have you disabled all Z2JH NetworkPolicies? Can you share your current Z2JH config?

Can you also try bypassing the load balancer or ingress by configuring a NodePort and connecting directly to the nodes? You might need to modify your AWS security groups to allow that.
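A values fragment along these lines should expose the proxy via a NodePort (a sketch based on the chart options shown later in the thread; the port number is an arbitrary choice from the default NodePort range):

```yaml
# Hypothetical values fragment: expose proxy-public as a NodePort so the
# proxy can be reached directly on any node's IP, bypassing the load
# balancer and Istio. 30080 is arbitrary; any free port in the cluster's
# NodePort range (30000-32767 by default) works.
proxy:
  service:
    type: NodePort
    nodePorts:
      http: 30080
```

You could then browse to http://[node-ipv6-address]:30080/ once the security groups allow it.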


By Z2JH config you mean the values.yaml supplied with the Helm deployment, correct?

Yes, the config YAML file you pass when running Helm to override the default values.

# fullnameOverride and nameOverride distinguishes blank strings, null values,
# and non-blank strings. For more details, see the configuration reference.
fullnameOverride: ""
nameOverride:

# enabled is ignored by the jupyterhub chart itself, but a chart depending on
# the jupyterhub chart conditionally can make use this config option as the
# condition.
enabled:

# custom can contain anything you want to pass to the hub pod, as all passed
# Helm template values will be made available there.
custom: {}

# imagePullSecret is configuration to create a k8s Secret that Helm chart's pods
# can get credentials from to pull their images.
imagePullSecret:
  create: false
  automaticReferenceInjection: true
  registry:
  username:
  password:
  email:

# imagePullSecrets is configuration to reference the k8s Secret resources the
# Helm chart's pods can get credentials from to pull their images.
imagePullSecrets: []

# hub relates to the hub pod, responsible for running JupyterHub, its configured
# Authenticator class KubeSpawner, and its configured Proxy class
# ConfigurableHTTPProxy. KubeSpawner creates the user pods, and
# ConfigurableHTTPProxy speaks with the actual ConfigurableHTTPProxy server in
# the proxy pod.
hub:
  revisionHistoryLimit:
  config:
    JupyterHub:
      admin_access: true
      authenticator_class: dummy
      default_url: /hub/home
      cleanup_servers: true
    KubeSpawner:
      ip: "[::]"
    Spawner:
      http_timeout: 300 # Default is 30 seconds
      start_timeout: 360 # Default is 60 seconds
  service:
    type: ClusterIP
    annotations: {}
    ports:
      nodePort:
      appProtocol:
    extraPorts: []
    loadBalancerIP:
  baseUrl: /hub
  cookieSecret:
  initContainers: []
  nodeSelector:
    kubernetes.io/arch: amd64
  tolerations: []
  concurrentSpawnLimit: 64
  consecutiveFailureLimit: 5
  activeServerLimit:
  deploymentStrategy:
    ## type: Recreate
    ## - sqlite-pvc backed hubs require the Recreate deployment strategy as a
    ##   typical PVC storage can only be bound to one pod at the time.
    ## - JupyterHub isn't designed to support being run in parallel. More work
    ##   needs to be done in JupyterHub itself before a fully highly available
    ##   (HA) deployment of JupyterHub on k8s is possible.
    type: Recreate
  db:
    type: sqlite-pvc
    upgrade:
    pvc:
      annotations: {}
      selector: {}
      accessModes:
      - ReadWriteOnce
      storage: 1Gi
      subPath:
      storageClassName:
    url:
    password:
  labels: {}
  annotations: {}
  command: []
  args: []
  extraConfig: {}
  extraFiles: {}
  extraEnv: {}
  extraContainers: []
  extraVolumes: []
  extraVolumeMounts: []
  image:
    name: quay.io/jupyterhub/k8s-hub
    tag: "4.0.0"
    pullPolicy:
    pullSecrets: []
  resources: {}
  podSecurityContext:
    runAsNonRoot: true
    fsGroup: 1000
    seccompProfile:
      type: "RuntimeDefault"
  containerSecurityContext:
    runAsUser: 1000
    runAsGroup: 1000
    allowPrivilegeEscalation: false
    capabilities:
      drop: [ "ALL" ]
  lifecycle: {}
  loadRoles: {}
  services: {}
  pdb:
    enabled: false
    maxUnavailable:
    minAvailable: 1
  networkPolicy:
    enabled: true
    ingress: []
    egress: []
    egressAllowRules:
      cloudMetadataServer: true
      dnsPortsCloudMetadataServer: true
      dnsPortsKubeSystemNamespace: true
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: true
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  allowNamedServers: false
  namedServerLimitPerUser:
  authenticatePrometheus:
  redirectToServer:
  shutdownOnLogout:
  templatePaths: []
  templateVars: {}
  livenessProbe:
    # The livenessProbe's aim to give JupyterHub sufficient time to startup but
    # be able to restart if it becomes unresponsive for ~5 min.
    enabled: true
    initialDelaySeconds: 300
    periodSeconds: 10
    failureThreshold: 30
    timeoutSeconds: 3
  readinessProbe:
    # The readinessProbe's aim is to provide a successful startup indication,
    # but following that never become unready before its livenessProbe fail and
    # restarts it if needed. To become unready following startup serves no
    # purpose as there are no other pod to fallback to in our non-HA deployment.
    enabled: true
    initialDelaySeconds: 0
    periodSeconds: 2
    failureThreshold: 1000
    timeoutSeconds: 1
  existingSecret:
  serviceAccount:
    create: true
    name:
    annotations: {}
  extraPodSpec: {}

rbac:
  create: true

# proxy relates to the proxy pod, the proxy-public service, and the autohttps
# pod and proxy-http service.
proxy:
  secretToken:
  annotations: {}
  deploymentStrategy:
    ## type: Recreate
    ## - JupyterHub's interaction with the CHP proxy becomes a lot more robust
    ##   with this configuration. To understand this, consider that JupyterHub
    ##   during startup will interact a lot with the k8s service to reach a
    ##   ready proxy pod. If the hub pod during a helm upgrade is restarting
    ##   directly while the proxy pod is making a rolling upgrade, the hub pod
    ##   could end up running a sequence of interactions with the old proxy pod
    ##   and finishing up the sequence of interactions with the new proxy pod.
    ##   As CHP proxy pods carry individual state this is very error prone. One
    ##   outcome when not using Recreate as a strategy has been that user pods
    ##   have been deleted by the hub pod because it considered them unreachable
    ##   as it only configured the old proxy pod but not the new before trying
    ##   to reach them.
    type: Recreate
    ## rollingUpdate:
    ## - WARNING:
    ##   This is required to be set explicitly blank! Without it being
    ##   explicitly blank, k8s will let eventual old values under rollingUpdate
    ##   remain and then the Deployment becomes invalid and a helm upgrade would
    ##   fail with an error like this:
    ##
    ##     UPGRADE FAILED
    ##     Error: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    ##     Error: UPGRADE FAILED: Deployment.apps "proxy" is invalid: spec.strategy.rollingUpdate: Forbidden: may not be specified when strategy `type` is 'Recreate'
    rollingUpdate:
  # service relates to the proxy-public service
  service:
    type: ClusterIP
    labels: {}
    annotations: {}
    nodePorts:
      http:
      https:
    disableHttpPort: false
    extraPorts: []
    loadBalancerIP:
    loadBalancerSourceRanges: []
  # chp relates to the proxy pod, which is responsible for routing traffic based
  # on dynamic configuration sent from JupyterHub to CHP's REST API.
  chp:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: quay.io/jupyterhub/configurable-http-proxy
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      tag: "4.6.2" # https://github.com/jupyterhub/configurable-http-proxy/tags
      pullPolicy:
      pullSecrets: []
    extraCommandLineFlags: []
    livenessProbe:
      enabled: true
      initialDelaySeconds: 60
      periodSeconds: 10
      failureThreshold: 30
      timeoutSeconds: 3
    readinessProbe:
      enabled: true
      initialDelaySeconds: 0
      periodSeconds: 2
      failureThreshold: 1000
      timeoutSeconds: 1
    resources: {}
    defaultTarget:
    errorTarget:
    extraEnv: {}
    nodeSelector:
      kubernetes.io/arch: amd64
    tolerations: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: true
        dnsPortsCloudMetadataServer: true
        dnsPortsKubeSystemNamespace: true
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [ http, https ]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    extraPodSpec: {}
  # traefik relates to the autohttps pod, which is responsible for TLS
  # termination when proxy.https.type=letsencrypt.
  traefik:
    revisionHistoryLimit:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: traefik
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      tag: "v3.2.0" # ref: https://hub.docker.com/_/traefik?tab=tags
      pullPolicy:
      pullSecrets: []
    hsts:
      includeSubdomains: false
      preload: false
      maxAge: 15724800 # About 6 months
    resources: {}
    labels: {}
    extraInitContainers: []
    extraEnv: {}
    extraVolumes: []
    extraVolumeMounts: []
    extraStaticConfig: {}
    extraDynamicConfig: {}
    nodeSelector:
      kubernetes.io/arch: amd64
    tolerations: []
    extraPorts: []
    networkPolicy:
      enabled: true
      ingress: []
      egress: []
      egressAllowRules:
        cloudMetadataServer: true
        dnsPortsCloudMetadataServer: true
        dnsPortsKubeSystemNamespace: true
        dnsPortsPrivateIPs: true
        nonPrivateIPs: true
        privateIPs: true
      interNamespaceAccessLabels: ignore
      allowedIngressPorts: [ http, https ]
    pdb:
      enabled: false
      maxUnavailable:
      minAvailable: 1
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  secretSync:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: quay.io/jupyterhub/k8s-secret-sync
      tag: "4.0.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  labels: {}
  https:
    enabled: false
    type: letsencrypt
    #type: letsencrypt, manual, offload, secret
    letsencrypt:
      contactEmail:
      # Specify a custom ACME server here to hit the Let's Encrypt staging
      # endpoint: https://acme-staging-v02.api.letsencrypt.org/directory
      acmeServer: https://acme-v02.api.letsencrypt.org/directory
    manual:
      key:
      cert:
    secret:
      name:
      key: tls.key
      crt: tls.crt
    hosts: []

# singleuser relates to the configuration of KubeSpawner which runs in the hub
# pod, and its spawning of user pods such as jupyter-myusername.
singleuser:
  podNameTemplate:
  extraTolerations: []
  nodeSelector:
    kubernetes.io/arch: amd64
  extraNodeAffinity:
    required: []
    preferred: []
  extraPodAffinity:
    required: []
    preferred: []
  extraPodAntiAffinity:
    required: []
    preferred: []
  networkTools:
    image:
      name: quay.io/jupyterhub/k8s-network-tools
      tag: "4.0.0"
      pullPolicy:
      pullSecrets: []
    resources: {}
  cloudMetadata:
    # block set to true will append a privileged initContainer using the
    # iptables to block the sensitive metadata server at the provided ip.
    blockWithIptables: true
    ip: 169.254.169.254
  networkPolicy:
    enabled: true
    ingress:
    - from:
      - podSelector: {}
      ports:
      - protocol: TCP
        port: 8888
    egress: []
    egressAllowRules:
      cloudMetadataServer: false
      dnsPortsCloudMetadataServer: true
      dnsPortsKubeSystemNamespace: true
      dnsPortsPrivateIPs: true
      nonPrivateIPs: true
      privateIPs: false
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  events: true
  extraAnnotations: {}
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
  extraFiles: {}
  extraEnv:
    NOTEBOOK_ARGS: "--ip=[::] --port=8888"
  lifecycleHooks: {}
  initContainers: []
  extraContainers: []
  allowPrivilegeEscalation: false
  uid: 1000
  fsGid: 100
  serviceAccountName:
  storage:
    type: dynamic
    extraLabels: {}
    extraVolumes: []
    extraVolumeMounts: []
    static:
      pvcName:
      subPath: "{username}"
    capacity: 10Gi
    homeMountPath: /home/jovyan
    dynamic:
      storageClass:
      pvcNameTemplate:
      volumeNameTemplate: volume-{user_server}
      storageAccessModes: [ ReadWriteOnce ]
      subPath:
  image:
    name: quay.io/jupyterhub/k8s-singleuser-sample
    tag: "4.0.0"
    pullPolicy:
    pullSecrets: []
  startTimeout: 300
  cpu:
    limit:
    guarantee:
  memory:
    limit:
    guarantee: 1G
  extraResource:
    limits: {}
    guarantees: {}
  cmd: [ "jupyterhub-singleuser", "--ip='::'", "--port=8888" ]
  defaultUrl:
  extraPodConfig: {}
  profileList: []

# scheduling relates to the user-scheduler pods and user-placeholder pods.
scheduling:
  userScheduler:
    enabled: true
    revisionHistoryLimit:
    replicas: 2
    logLevel: 4
    # plugins are configured on the user-scheduler to make us score how we
    # schedule user pods in a way to help us schedule on the most busy node. By
    # doing this, we help scale down more effectively. It isn't obvious how to
    # enable/disable scoring plugins, and configure them, to accomplish this.
    #
    # plugins ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins-1
    # migration ref: https://kubernetes.io/docs/reference/scheduling/config/#scheduler-configuration-migrations
    #
    plugins:
      score:
        # We make use of the default scoring plugins, but we re-enable some with
        # a new priority, leave some enabled with their lower default priority,
        # and disable some.
        #
        # Below are the default scoring plugins as of 2024-09-23 according to
        # https://kubernetes.io/docs/reference/scheduling/config/#scheduling-plugins.
        #
        # Re-enabled with high priority:
        # - NodeAffinity
        # - InterPodAffinity
        # - NodeResourcesFit
        # - ImageLocality
        #
        # Remains enabled with low default priority:
        # - TaintToleration
        # - PodTopologySpread
        # - VolumeBinding
        #
        # Disabled for scoring:
        # - NodeResourcesBalancedAllocation
        #
        disabled:
        # We disable these plugins (with regards to scoring) to not interfere
        # or complicate our use of NodeResourcesFit.
        - name: NodeResourcesBalancedAllocation
        # Disable plugins to be allowed to enable them again with a different
        # weight and avoid an error.
        - name: NodeAffinity
        - name: InterPodAffinity
        - name: NodeResourcesFit
        - name: ImageLocality
        enabled:
        - name: NodeAffinity
          weight: 14631
        - name: InterPodAffinity
          weight: 1331
        - name: NodeResourcesFit
          weight: 121
        - name: ImageLocality
          weight: 11
    pluginConfig:
    # Here we declare that we should optimize pods to fit based on a
    # MostAllocated strategy instead of the default LeastAllocated.
    - name: NodeResourcesFit
      args:
        scoringStrategy:
          type: MostAllocated
          resources:
          - name: cpu
            weight: 1
          - name: memory
            weight: 1
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      # IMPORTANT: Bumping the minor version of this binary should go hand in
      #            hand with an inspection of the user-scheduler's RBAC
      #            resources that we have forked in
      #            templates/scheduling/user-scheduler/rbac.yaml.
      #
      #            Debugging advice:
      #
      #            - Is configuration of kube-scheduler broken in
      #              templates/scheduling/user-scheduler/configmap.yaml?
      #
      #            - Is the kube-scheduler binary's compatibility to work
      #              against a k8s api-server that is too new or too old?
      #
      #            - You can update the GitHub workflow that runs tests to
      #              include "deploy/user-scheduler" in the k8s namespace report
      #              and reduce the user-scheduler deployments replicas to 1 in
      #              dev-config.yaml to get relevant logs from the user-scheduler
      #              pods. Inspect the "Kubernetes namespace report" action!
      #
      #            - Typical failures are that kube-scheduler fails to search for
      #              resources via its "informers", and won't start trying to
      #              schedule pods before they succeed which may require
      #              additional RBAC permissions or that the k8s api-server is
      #              aware of the resources.
      #
      #            - If "successfully acquired lease" can be seen in the logs, it
      #              is a good sign kube-scheduler is ready to schedule pods.
      #
      name: registry.k8s.io/kube-scheduler
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow. The minor version is pinned in the
      # workflow, and should be updated there if a minor version bump is done
      # here. We aim to stay around 1 minor version behind the latest k8s
      # version.
      #
      tag: "v1.30.6" # ref: https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
      pullPolicy:
      pullSecrets: []
    nodeSelector:
      kubernetes.io/arch: amd64
    tolerations: []
    labels: {}
    annotations: {}
    pdb:
      enabled: true
      maxUnavailable: 1
      minAvailable:
    resources: {}
    serviceAccount:
      create: true
      name:
      annotations: {}
    extraPodSpec: {}
  podPriority:
    enabled: false
    globalDefault: false
    defaultPriority: 0
    imagePullerPriority: -5
    userPlaceholderPriority: -10
  userPlaceholder:
    enabled: true
    image:
      name: registry.k8s.io/pause
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      # If you update this, also update prePuller.pause.image.tag
      #
      tag: "3.10"
      pullPolicy:
      pullSecrets: []
    revisionHistoryLimit:
    replicas: 0
    labels: {}
    annotations: {}
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    resources: {}
  corePods:
    tolerations:
    - key: hub.jupyter.org/dedicated
      operator: Equal
      value: core
      effect: NoSchedule
    - key: hub.jupyter.org_dedicated
      operator: Equal
      value: core
      effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer
  userPods:
    tolerations:
    - key: hub.jupyter.org/dedicated
      operator: Equal
      value: user
      effect: NoSchedule
    - key: hub.jupyter.org_dedicated
      operator: Equal
      value: user
      effect: NoSchedule
    nodeAffinity:
      matchNodePurpose: prefer

# prePuller relates to the hook|continuous-image-puller DaemonSets
prePuller:
  revisionHistoryLimit:
  labels: {}
  annotations: {}
  resources: {}
  containerSecurityContext:
    runAsNonRoot: true
    runAsUser: 65534 # nobody user
    runAsGroup: 65534 # nobody group
    allowPrivilegeEscalation: false
    capabilities:
      drop: [ "ALL" ]
    seccompProfile:
      type: "RuntimeDefault"
  extraTolerations: []
  # hook relates to the hook-image-awaiter Job and hook-image-puller DaemonSet
  hook:
    enabled: false
    pullOnlyOnChanges: true
    # image and the configuration below relates to the hook-image-awaiter Job
    image:
      name: quay.io/jupyterhub/k8s-image-awaiter
      tag: "4.0.0"
      pullPolicy:
      pullSecrets: []
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    podSchedulingWaitDuration: 10
    nodeSelector:
      kubernetes.io/arch: amd64
    tolerations: []
    resources: {}
    serviceAccount:
      create: true
      name:
      annotations: {}
  continuous:
    enabled: false
  pullProfileListImages: true
  extraImages: {}
  pause:
    containerSecurityContext:
      runAsNonRoot: true
      runAsUser: 65534 # nobody user
      runAsGroup: 65534 # nobody group
      allowPrivilegeEscalation: false
      capabilities:
        drop: [ "ALL" ]
      seccompProfile:
        type: "RuntimeDefault"
    image:
      name: registry.k8s.io/pause
      # tag is automatically bumped to new patch versions by the
      # watch-dependencies.yaml workflow.
      #
      # If you update this, also update scheduling.userPlaceholder.image.tag
      #
      tag: "3.10"
      pullPolicy:
      pullSecrets: []

ingress:
  enabled: false
  annotations: {}
  ingressClassName:
  hosts:
  - data-platform-jupyterhub.k8s.dev.dev.net
  pathSuffix:
  pathType: Prefix
  tls: []
  extraPaths: []

# cull relates to the jupyterhub-idle-culler service, responsible for evicting
# inactive singleuser pods.
#
# The configuration below, except for enabled, corresponds to command-line flags
# for jupyterhub-idle-culler as documented here:
# https://github.com/jupyterhub/jupyterhub-idle-culler#as-a-standalone-script
#
cull:
  enabled: true
  users: false # --cull-users
  adminUsers: true # --cull-admin-users
  removeNamedServers: false # --remove-named-servers
  timeout: 3600 # --timeout
  every: 600 # --cull-every
  concurrency: 10 # --concurrency
  maxAge: 0 # --max-age

debug:
  enabled: true

global:
  safeToShowValues: true

Can you try:

  • Disabling all network policies (set hub.networkPolicy.enabled, proxy.chp.networkPolicy.enabled, and singleuser.networkPolicy.enabled to false)
  • Removing hub.baseUrl so the default / is used
  • Removing extraEnv.NOTEBOOK_ARGS
  • Removing the inner quotes in "--ip='::'" in singleuser.cmd
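For reference, the overrides above might look roughly like this in a values file. This is only a sketch: the key paths assume the Z2JH 4.x chart schema, and the exact singleuser.cmd entries depend on your existing configuration.

```yaml
# values-ipv6-debug.yaml -- sketch of the suggested overrides (Z2JH 4.x schema assumed)
hub:
  networkPolicy:
    enabled: false
  # hub.baseUrl removed so the default "/" is used
proxy:
  chp:
    networkPolicy:
      enabled: false
singleuser:
  networkPolicy:
    enabled: false
  # extraEnv.NOTEBOOK_ARGS removed
  cmd:
    - jupyterhub-singleuser
    - --ip=::   # no inner quotes; the literal value should be just "::"
```

The inner-quote point matters because the quotes are not stripped by a shell here: with "--ip='::'" the notebook server receives the literal string '::' (including the single quotes) as its bind address, rather than the IPv6 wildcard ::.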

I had forgotten I'd added some of those. I'm still getting the redirect loop after updating the values and removing the policies; I also tried a complete helm uninstall and redeployed with the updated config, and still hit the same redirect loop.

Latest log below:

[D 2025-02-04 10:54:01.436 JupyterHub application:929] Loaded config file: /usr/local/etc/jupyterhub/jupyterhub_config.py
[I 2025-02-04 10:54:01.459 JupyterHub app:3346] Running JupyterHub version 5.2.1
[I 2025-02-04 10:54:01.459 JupyterHub app:3376] Using Authenticator: jupyterhub.auth.DummyAuthenticator-5.2.1
[I 2025-02-04 10:54:01.459 JupyterHub app:3376] Using Spawner: kubespawner.spawner.KubeSpawner-7.0.0
[I 2025-02-04 10:54:01.459 JupyterHub app:3376] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-5.2.1
[D 2025-02-04 10:54:01.461 JupyterHub app:1998] Connecting to db: sqlite:///jupyterhub.sqlite
[D 2025-02-04 10:54:01.483 JupyterHub orm:1509] database schema version found: 4621fec11365
[D 2025-02-04 10:54:01.488 JupyterHub orm:1509] database schema version found: 4621fec11365
[D 2025-02-04 10:54:01.536 JupyterHub app:2338] Loading roles into database
[D 2025-02-04 10:54:01.537 JupyterHub app:2347] Loading role jupyterhub-idle-culler
[W 2025-02-04 10:54:01.542 JupyterHub auth:1508] Using testing authenticator DummyAuthenticator! This is not meant for production!
[I 2025-02-04 10:54:01.619 JupyterHub app:2919] Creating service jupyterhub-idle-culler without oauth.
[D 2025-02-04 10:54:01.622 JupyterHub app:2685] Purging expired APITokens
[D 2025-02-04 10:54:01.623 JupyterHub app:2685] Purging expired OAuthCodes
[D 2025-02-04 10:54:01.625 JupyterHub app:2685] Purging expired Shares
[D 2025-02-04 10:54:01.626 JupyterHub app:2685] Purging expired ShareCodes
[D 2025-02-04 10:54:01.627 JupyterHub app:2459] Loading role assignments from config
[D 2025-02-04 10:54:01.645 JupyterHub app:2970] Initializing spawners
[D 2025-02-04 10:54:01.651 JupyterHub app:3120] Loaded users:
    
[I 2025-02-04 10:54:01.651 JupyterHub app:3416] Initialized 0 spawners in 0.007 seconds
[I 2025-02-04 10:54:01.656 JupyterHub metrics:373] Found 1 active users in the last ActiveUserPeriods.twenty_four_hours
[I 2025-02-04 10:54:01.656 JupyterHub metrics:373] Found 1 active users in the last ActiveUserPeriods.seven_days
[I 2025-02-04 10:54:01.657 JupyterHub metrics:373] Found 1 active users in the last ActiveUserPeriods.thirty_days
[I 2025-02-04 10:54:01.657 JupyterHub app:3703] Not starting proxy
[D 2025-02-04 10:54:01.657 JupyterHub proxy:925] Proxy: Fetching GET http://proxy-api:8001/api/routes
[D 2025-02-04 10:54:01.663 JupyterHub proxy:996] Omitting non-jupyterhub route '/'
[I 2025-02-04 10:54:01.663 JupyterHub app:3739] Hub API listening on http://:8081/hub/
[I 2025-02-04 10:54:01.663 JupyterHub app:3741] Private Hub API connect url http://hub:8081/hub/
[I 2025-02-04 10:54:01.663 JupyterHub app:3615] Starting managed service jupyterhub-idle-culler
[I 2025-02-04 10:54:01.663 JupyterHub service:423] Starting service 'jupyterhub-idle-culler': ['python3', '-m', 'jupyterhub_idle_culler', '--url=http://localhost:8081/hub/api', '--timeout=3600', '--cull-every=600', '--concurrency=10']
[I 2025-02-04 10:54:01.664 JupyterHub service:136] Spawning python3 -m jupyterhub_idle_culler --url=http://localhost:8081/hub/api --timeout=3600 --cull-every=600 --concurrency=10
[D 2025-02-04 10:54:01.665 JupyterHub spawner:1475] Polling subprocess every 30s
[D 2025-02-04 10:54:01.665 JupyterHub proxy:389] Fetching routes to check
[D 2025-02-04 10:54:01.665 JupyterHub proxy:925] Proxy: Fetching GET http://proxy-api:8001/api/routes
[D 2025-02-04 10:54:01.667 JupyterHub proxy:996] Omitting non-jupyterhub route '/'
[D 2025-02-04 10:54:01.667 JupyterHub proxy:392] Checking routes
[I 2025-02-04 10:54:01.667 JupyterHub proxy:477] Adding route for Hub: / => http://hub:8081
[D 2025-02-04 10:54:01.667 JupyterHub proxy:925] Proxy: Fetching POST http://proxy-api:8001/api/routes/
[I 2025-02-04 10:54:01.670 JupyterHub app:3772] JupyterHub is now running, internal Hub API at http://hub:8081/hub/
[D 2025-02-04 10:54:01.670 JupyterHub app:3339] It took 0.491 seconds for the Hub to start
[D 2025-02-04 10:54:01.823 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.79ms
[D 2025-02-04 10:54:01.831 JupyterHub base:366] Recording first activity for <APIToken('8dd5...', service='jupyterhub-idle-culler', client_id='jupyterhub')>
[I 2025-02-04 10:54:01.839 JupyterHub log:192] 200 GET /hub/api/ (jupyterhub-idle-culler@::1) 9.48ms
[D 2025-02-04 10:54:01.841 JupyterHub scopes:1010] Checking access to /hub/api/users via scope list:users
[I 2025-02-04 10:54:01.851 JupyterHub log:192] 200 GET /hub/api/users?state=[secret] (jupyterhub-idle-culler@::1) 10.63ms
[D 2025-02-04 10:54:02.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.55ms
[D 2025-02-04 10:54:04.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.64ms
[D 2025-02-04 10:54:06.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.52ms
[D 2025-02-04 10:54:08.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.53ms
[D 2025-02-04 10:54:10.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.58ms
[D 2025-02-04 10:54:12.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.61ms
[D 2025-02-04 10:54:14.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.52ms
[D 2025-02-04 10:54:16.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.51ms
[D 2025-02-04 10:54:17.310 JupyterHub base:411] Refreshing auth for admin
[D 2025-02-04 10:54:17.310 JupyterHub user:496] Creating <class 'kubespawner.spawner.KubeSpawner'> for admin:
[D 2025-02-04 10:54:17.313 JupyterHub _xsrf_utils:155] xsrf id mismatch b'FMymA-KlJU6vZuXNGWrRUMLSyPAdjXEygFUeIo06pmI=:fb91b22d09964b50bd50940a8a67e209' != b'atF6fQPnHnry_DJzHucYWOfbRjByN5umxU5XZSmD49k=:fb91b22d09964b50bd50940a8a67e209'
[I 2025-02-04 10:54:17.313 JupyterHub _xsrf_utils:125] Setting new xsrf cookie for b'atF6fQPnHnry_DJzHucYWOfbRjByN5umxU5XZSmD49k=:fb91b22d09964b50bd50940a8a67e209' {'path': '/hub/'}
[I 2025-02-04 10:54:17.338 JupyterHub log:192] 200 GET /hub/home (admin@2600:1f18:96c:2f04::d6fc) 41.57ms
[D 2025-02-04 10:54:17.508 JupyterHub log:192] 200 GET /hub/static/js/home.js?v=20250204105401 (@2600:1f18:96c:2f04::d6fc) 0.97ms
[D 2025-02-04 10:54:17.633 JupyterHub log:192] 200 GET /hub/static/favicon.ico?v=fde5757cd3892b979919d3b1faa88a410f28829feb5ba22b6cf069f2c6c98675fceef90f932e49b510e74d65c681d5846b943e7f7cc1b41867422f0481085c1f (@2600:1f18:96c:2f04::d6fc) 0.71ms
[D 2025-02-04 10:54:17.634 JupyterHub log:192] 200 GET /hub/static/js/jhapi.js?v=20250204105401 (@2600:1f18:96c:2f04::d6fc) 0.53ms
[D 2025-02-04 10:54:17.634 JupyterHub log:192] 200 GET /hub/static/components/moment/moment.js?v=20250204105401 (@2600:1f18:96c:2f04::d6fc) 0.80ms
[D 2025-02-04 10:54:17.757 JupyterHub log:192] 200 GET /hub/static/js/utils.js?v=20250204105401 (@2600:1f18:96c:2f04::d6fc) 0.65ms
[D 2025-02-04 10:54:18.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.57ms
[D 2025-02-04 10:54:18.904 JupyterHub log:192] 304 GET /hub/home (admin@2600:1f18:96c:2f04::d6fc) 4.58ms
[D 2025-02-04 10:54:20.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.52ms
[D 2025-02-04 10:54:22.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.54ms
[D 2025-02-04 10:54:24.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.53ms
[D 2025-02-04 10:54:26.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.57ms
[D 2025-02-04 10:54:28.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.53ms
[D 2025-02-04 10:54:30.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.57ms
[D 2025-02-04 10:54:32.083 JupyterHub scopes:1010] Checking access to /hub/spawn/admin via scope servers!server=admin/
[D 2025-02-04 10:54:32.083 JupyterHub pages:216] Triggering spawn with default options for admin
[D 2025-02-04 10:54:32.083 JupyterHub base:1095] Initiating spawn for admin
[D 2025-02-04 10:54:32.083 JupyterHub base:1099] 0/64 concurrent spawns
[D 2025-02-04 10:54:32.083 JupyterHub base:1104] 0 active servers
[I 2025-02-04 10:54:32.106 JupyterHub provider:661] Creating oauth client jupyterhub-user-admin
[D 2025-02-04 10:54:32.127 JupyterHub user:913] Calling Spawner.start for admin
[I 2025-02-04 10:54:32.130 JupyterHub log:192] 302 GET /hub/spawn/admin -> /hub/spawn-pending/admin (admin@2600:1f18:96c:2f04::d6fc) 50.30ms
[I 2025-02-04 10:54:32.140 JupyterHub reflector:297] watching for pods with label selector='component=singleuser-server' in namespace data-platform-jupyterhub
[D 2025-02-04 10:54:32.140 JupyterHub reflector:304] Connecting pods watcher
[I 2025-02-04 10:54:32.142 JupyterHub reflector:297] watching for events with field selector='involvedObject.kind=Pod' in namespace data-platform-jupyterhub
[D 2025-02-04 10:54:32.142 JupyterHub reflector:304] Connecting events watcher
[I 2025-02-04 10:54:32.144 JupyterHub spawner:2931] Attempting to create pvc claim-admin, with timeout 3
[I 2025-02-04 10:54:32.165 JupyterHub spawner:2947] PVC claim-admin already exists, so did not create new pvc.
[I 2025-02-04 10:54:32.166 JupyterHub spawner:2890] Attempting to create pod jupyter-admin, with timeout 3
[D 2025-02-04 10:54:32.253 JupyterHub scopes:1010] Checking access to /hub/spawn-pending/admin via scope servers!server=admin/
[I 2025-02-04 10:54:32.253 JupyterHub pages:397] admin is pending spawn
[I 2025-02-04 10:54:32.256 JupyterHub log:192] 200 GET /hub/spawn-pending/admin (admin@2600:1f18:96c:2f04::d6fc) 7.10ms
[D 2025-02-04 10:54:32.436 JupyterHub scopes:1010] Checking access to /hub/api/users/admin/server/progress via scope read:servers!server=admin/
[D 2025-02-04 10:54:32.439 JupyterHub spawner:2672] progress generator: jupyter-admin
[D 2025-02-04 10:54:32.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.59ms
[D 2025-02-04 10:54:34.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.51ms
[D 2025-02-04 10:54:36.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.54ms
[D 2025-02-04 10:54:38.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.57ms
[D 2025-02-04 10:54:40.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.58ms
[D 2025-02-04 10:54:42.149 JupyterHub reflector:390] pods watcher timeout
[D 2025-02-04 10:54:42.149 JupyterHub reflector:304] Connecting pods watcher
[D 2025-02-04 10:54:42.151 JupyterHub reflector:390] events watcher timeout
[D 2025-02-04 10:54:42.151 JupyterHub reflector:304] Connecting events watcher
[D 2025-02-04 10:54:42.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.59ms
[D 2025-02-04 10:54:44.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.51ms
[D 2025-02-04 10:54:46.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.52ms
[D 2025-02-04 10:54:48.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.50ms
[D 2025-02-04 10:54:50.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.55ms
[D 2025-02-04 10:54:52.156 JupyterHub reflector:390] pods watcher timeout
[D 2025-02-04 10:54:52.156 JupyterHub reflector:304] Connecting pods watcher
[D 2025-02-04 10:54:52.159 JupyterHub reflector:390] events watcher timeout
[D 2025-02-04 10:54:52.159 JupyterHub reflector:304] Connecting events watcher
[D 2025-02-04 10:54:52.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.51ms
[D 2025-02-04 10:54:54.807 JupyterHub spawner:3254] pod data-platform-jupyterhub/jupyter-admin events before launch: 2025-02-04T10:54:34.187307Z [Normal] Successfully assigned data-platform-jupyterhub/jupyter-admin to ip-10-82-113-232.ec2.internal
    2025-02-04T10:54:36Z [Normal] AttachVolume.Attach succeeded for volume "pvc-a1dad947-d8fe-4b56-955b-2a5f9c039346" 
    2025-02-04T10:54:46Z [Normal] Pulling image "docker.internal.net/jupyterhub/k8s-network-tools:4.0.0"
    2025-02-04T10:54:48Z [Normal] Successfully pulled image "docker.internal.net/jupyterhub/k8s-network-tools:4.0.0" in 1.283s (1.283s including waiting)
    2025-02-04T10:54:48Z [Normal] Created container block-cloud-metadata
    2025-02-04T10:54:48Z [Normal] Started container block-cloud-metadata
    2025-02-04T10:54:48Z [Normal] Pulling image "docker.internal.net/jupyterhub/k8s-singleuser-sample:4.0.0"
    2025-02-04T10:54:53Z [Normal] Successfully pulled image "docker.internal.net/jupyterhub/k8s-singleuser-sample:4.0.0" in 5.132s (5.132s including waiting)
    2025-02-04T10:54:53Z [Normal] Created container notebook
    2025-02-04T10:54:53Z [Normal] Started container notebook
[D 2025-02-04 10:54:54.814 JupyterHub spawner:1475] Polling subprocess every 30s
[D 2025-02-04 10:54:54.815 JupyterHub utils:292] Waiting 300s for server at http://[2600:1f18:96c:2f03:8acd::b]:8888/user/admin/api
[D 2025-02-04 10:54:54.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.50ms
[I 2025-02-04 10:54:55.142 JupyterHub log:192] 200 GET /hub/api (@2600:1f18:96c:2f03:8acd::b) 0.56ms
[D 2025-02-04 10:54:55.192 JupyterHub base:366] Recording first activity for <APIToken('18a6...', user='admin', client_id='jupyterhub')>
[D 2025-02-04 10:54:55.201 JupyterHub scopes:1010] Checking access to /hub/api/users/admin/activity via scope users:activity!user=admin
[D 2025-02-04 10:54:55.204 JupyterHub users:1006] Activity for user admin: 2025-02-04T10:54:55.126170Z
[D 2025-02-04 10:54:55.204 JupyterHub users:1024] Activity on server admin/: 2025-02-04T10:54:55.126170Z
[I 2025-02-04 10:54:55.212 JupyterHub log:192] 200 POST /hub/api/users/admin/activity (admin@2600:1f18:96c:2f03:8acd::b) 21.87ms
[D 2025-02-04 10:54:55.676 JupyterHub utils:328] Server at http://[2600:1f18:96c:2f03:8acd::b]:8888/user/admin/api responded in 0.86s
[D 2025-02-04 10:54:55.676 JupyterHub _version:73] jupyterhub and jupyterhub-singleuser both on version 5.2.1
[I 2025-02-04 10:54:55.676 JupyterHub base:1124] User admin took 23.593 seconds to start
[I 2025-02-04 10:54:55.676 JupyterHub proxy:331] Adding user admin to proxy /user/admin/ => http://[2600:1f18:96c:2f03:8acd::b]:8888
[D 2025-02-04 10:54:55.676 JupyterHub proxy:925] Proxy: Fetching POST http://proxy-api:8001/api/routes/user/admin
[I 2025-02-04 10:54:55.680 JupyterHub users:899] Server admin is ready
[I 2025-02-04 10:54:55.680 JupyterHub log:192] 200 GET /hub/api/users/admin/server/progress?_xsrf=[secret] (admin@2600:1f18:96c:2f04::d6fc) 23247.35ms
[D 2025-02-04 10:54:55.823 JupyterHub scopes:1010] Checking access to /hub/spawn-pending/admin via scope servers!server=admin/
[I 2025-02-04 10:54:55.823 JupyterHub log:192] 302 GET /hub/spawn-pending/admin -> /user/admin/ (admin@2600:1f18:96c:2f04::d6fc) 3.90ms
[I 2025-02-04 10:54:55.951 JupyterHub log:192] 302 GET /user/admin/ -> /hub/user/admin/ (@2600:1f18:96c:2f04::d6fc) 0.57ms
[I 2025-02-04 10:54:56.086 JupyterHub log:192] 302 GET /hub/user/admin/ -> /user/admin/?redirects=1 (admin@2600:1f18:96c:2f04::d6fc) 3.56ms
[I 2025-02-04 10:54:56.212 JupyterHub log:192] 302 GET /user/admin/?redirects=1 -> /hub/user/admin/?redirects=1 (@2600:1f18:96c:2f04::d6fc) 0.61ms
[W 2025-02-04 10:54:56.345 JupyterHub base:1844] Redirect loop detected on /hub/user/admin/?redirects=1
[D 2025-02-04 10:54:56.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.59ms
[I 2025-02-04 10:54:58.346 JupyterHub log:192] 302 GET /hub/user/admin/?redirects=1 -> /user/admin/?redirects=2 (admin@2600:1f18:96c:2f04::d6fc) 2004.57ms
[I 2025-02-04 10:54:58.477 JupyterHub log:192] 302 GET /user/admin/?redirects=2 -> /hub/user/admin/?redirects=2 (@2600:1f18:96c:2f04::d6fc) 0.58ms
[W 2025-02-04 10:54:58.609 JupyterHub base:1844] Redirect loop detected on /hub/user/admin/?redirects=2
[D 2025-02-04 10:54:58.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.54ms
[D 2025-02-04 10:55:00.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.63ms
[D 2025-02-04 10:55:01.672 JupyterHub proxy:925] Proxy: Fetching GET http://proxy-api:8001/api/routes
[D 2025-02-04 10:55:01.690 JupyterHub proxy:392] Checking routes
[D 2025-02-04 10:55:02.164 JupyterHub reflector:390] pods watcher timeout
[D 2025-02-04 10:55:02.164 JupyterHub reflector:304] Connecting pods watcher
[D 2025-02-04 10:55:02.175 JupyterHub reflector:390] events watcher timeout
[D 2025-02-04 10:55:02.175 JupyterHub reflector:304] Connecting events watcher
[I 2025-02-04 10:55:02.612 JupyterHub log:192] 302 GET /hub/user/admin/?redirects=2 -> /user/admin/?redirects=3 (admin@2600:1f18:96c:2f04::d6fc) 4006.33ms
[D 2025-02-04 10:55:02.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.54ms
[I 2025-02-04 10:55:02.962 JupyterHub log:192] 302 GET /user/admin/?redirects=3 -> /hub/user/admin/?redirects=3 (@2600:1f18:96c:2f04::d6fc) 0.58ms
[W 2025-02-04 10:55:03.091 JupyterHub base:1844] Redirect loop detected on /hub/user/admin/?redirects=3
[D 2025-02-04 10:55:04.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.53ms
[D 2025-02-04 10:55:06.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.59ms
[D 2025-02-04 10:55:08.838 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.56ms
[D 2025-02-04 10:55:10.839 JupyterHub log:192] 200 GET /hub/health (@2600:1f18:96c:2f04::80a3) 0.80ms
[I 2025-02-04 10:55:11.092 JupyterHub log:192] 302 GET /hub/user/admin/?redirects=3 -> /user/admin/?redirects=4 (admin@2600:1f18:96c:2f04::d6fc) 8004.64ms
[I 2025-02-04 10:55:11.238 JupyterHub log:192] 302 GET /user/admin/?redirects=4 -> /hub/user/admin/?redirects=4 (@2600:1f18:96c:2f04::d6fc) 0.62ms
[W 2025-02-04 10:55:11.367 JupyterHub web:1873] 500 GET /hub/user/admin/?redirects=4 (2600:1f18:96c:2f04::d6fc): Redirect loop detected.
[D 2025-02-04 10:55:11.367 JupyterHub base:1519] Using default error template for 500

Can you share your deploy/proxy logs? They should show all external requests with their proxied destinations, as well as route updates from the hub.


Sure thing, see below.

Admin pod details also:

jupyter-admin                     1/1     Running           0          19s     2600:1f18:96c:2f03:fc8b::7   ip-10-82-95-169.ec2.internal    <none>           <none>
17:45:15.486 [ConfigProxy] info: 200 GET /api/routes
17:46:15.486 [ConfigProxy] info: 200 GET /api/routes
17:46:29.300 [ConfigProxy] info: Adding route /user/admin -> http://[2600:1f18:96c:2f03:fc8b::7]:8888
17:46:29.300 [ConfigProxy] info: Route added /user/admin -> http://[2600:1f18:96c:2f03:fc8b::7]:8888
17:46:29.300 [ConfigProxy] info: 201 POST /api/routes/user/admin
17:47:15.487 [ConfigProxy] info: 200 GET /api/routes