Kubernetes Helm Deployment Unable to Connect to Server Using IPv6

Can you check the new parameters were applied as expected using kubectl get service ... -o yaml (or describe)?

Assuming it’s as expected, I think it’s worth testing with a simpler minimal application, e.g. manually creating YAML manifests for an Nginx deployment and service, and seeing if you can find a configuration that deploys a load balancer. Then we can work out how to get that into Z2JH.
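Something along these lines might be a starting point (untested on my side, so treat it as a sketch; the nginx-ipv6-test name is just a placeholder, and the annotations and loadBalancerClass are copied from your proxy-public service below):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ipv6-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ipv6-test
  template:
    metadata:
      labels:
        app: nginx-ipv6-test
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ipv6-test
  annotations:
    # Same annotations as on your proxy-public service
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: "true"
spec:
  type: LoadBalancer
  # Copied from your proxy-public spec
  loadBalancerClass: service.k8s.aws/nlb
  ipFamilyPolicy: SingleStack
  ipFamilies:
  - IPv6
  selector:
    app: nginx-ipv6-test
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP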

Sure thing, let me check and get back to you.

Also just noticed: the EKS cluster only supports IPv6, not IPv4, so dual stack will not work?

See below the updated YAML for each Service. Ignore the chart version/release; I just bumped it to 4.0.1 and repackaged with my changes in order to release and test.

hub

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: jupyterhub
    meta.helm.sh/release-namespace: jupyterhub
    prometheus.io/path: /hub/metrics
    prometheus.io/port: "8081"
    prometheus.io/scrape: "true"
  creationTimestamp: "2025-01-07T23:31:08Z"
  labels:
    app: jupyterhub
    app.kubernetes.io/component: hub
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    chart: jupyterhub-4.0.1
    component: hub
    helm.sh/chart: jupyterhub-4.0.1
    heritage: Helm
    release: jupyterhub
  name: hub
  namespace: jupyterhub
  resourceVersion: "997678035"
  uid: 01d74829-16b0-4518-b523-4a65879acfb5
spec:
  clusterIP: fd5f:c06e:e955::dae
  clusterIPs:
  - fd5f:c06e:e955::dae
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - name: hub
    port: 8081
    protocol: TCP
    targetPort: http
  selector:
    app: jupyterhub
    component: hub
    release: jupyterhub
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

proxy api

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: jupyterhub
    meta.helm.sh/release-namespace: jupyterhub
  creationTimestamp: "2025-01-07T23:31:08Z"
  labels:
    app: jupyterhub
    app.kubernetes.io/component: proxy-api
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    chart: jupyterhub-4.0.1
    component: proxy-api
    helm.sh/chart: jupyterhub-4.0.1
    heritage: Helm
    release: jupyterhub
  name: proxy-api
  namespace: jupyterhub
  resourceVersion: "997678031"
  uid: 8ff9f5b7-8cea-4e9a-8723-0f191eb19d13
spec:
  clusterIP: fd5f:c06e:e955::fa5b
  clusterIPs:
  - fd5f:c06e:e955::fa5b
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8001
    protocol: TCP
    targetPort: api
  selector:
    app: jupyterhub
    component: proxy
    release: jupyterhub
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

proxy public

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: jupyterhub
    meta.helm.sh/release-namespace: jupyterhub
    service.beta.kubernetes.io/aws-load-balancer-ipv6-addresses: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  creationTimestamp: "2025-01-07T23:31:08Z"
  labels:
    app: jupyterhub
    app.kubernetes.io/component: proxy-public
    app.kubernetes.io/instance: jupyterhub
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: jupyterhub
    chart: jupyterhub-4.0.1
    component: proxy-public
    helm.sh/chart: jupyterhub-4.0.1
    heritage: Helm
    release: jupyterhub
  name: proxy-public
  namespace: jupyterhub
  resourceVersion: "997678041"
  uid: 3c7ad15b-6d5b-4209-8d39-0d172524629f
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: fd5f:c06e:e955::dd6d
  clusterIPs:
  - fd5f:c06e:e955::dd6d
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv6
  ipFamilyPolicy: SingleStack
  loadBalancerClass: service.k8s.aws/nlb
  ports:
  - name: http
    nodePort: 32145
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: jupyterhub
    component: proxy
    release: jupyterhub
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
NAME           TYPE           CLUSTER-IP             EXTERNAL-IP   PORT(S)        AGE
hub            ClusterIP      fd5f:c06e:e955::dae    <none>        8081/TCP       9m34s
proxy-api      ClusterIP      fd5f:c06e:e955::fa5b   <none>        8001/TCP       9m34s
proxy-public   LoadBalancer   fd5f:c06e:e955::dd6d   <pending>     80:32145/TCP   9m34s

I haven’t tried IPv6 on AWS, but the AWS blog post “The Journey to IPv6 on Amazon EKS: Foundation (Part 1)” suggests dual stack should work.
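If it helps to confirm what the cluster was created with, I think the AWS CLI will show the IP family (ipv4 or ipv6); <cluster-name> here is a placeholder:

# As far as I know the IP family is fixed at cluster creation time
aws eks describe-cluster --name <cluster-name> \
  --query "cluster.kubernetesNetworkConfig.ipFamily" --output text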

I’m afraid I don’t know why your LoadBalancer is stuck as pending. Are there any clues in your AWS Load Balancer Controller logs?
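For example, assuming the controller is installed under its usual name in kube-system (adjust if yours differs):

kubectl logs -n kube-system deployment/aws-load-balancer-controller --since=1h | grep -i proxy-public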

If you’ve got an AWS support contract it might be worth asking AWS for help? I think it’s enough to share just the proxy-public service definition with them, since all we care about for now is the creation of the load balancer; I don’t think the rest of the Z2JH stack matters for this.
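For example, something like this would give you just that one manifest, and the Events from describe often include the provisioning errors too:

# Dump only the proxy-public Service to a file to share
kubectl get service proxy-public -n jupyterhub -o yaml > proxy-public.yaml

# Events at the bottom of the output may show why the load balancer isn't being created
kubectl describe service proxy-public -n jupyterhub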

So I’m assuming the LB is stuck at pending because there are no public or internal load balancers in place. There is an Istio gateway virtual service, but I’ve also tried using kubectl port-forward to the hub svc to bypass it, and I get the same error.
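For reference, the port-forward was along these lines (the local port is arbitrary):

# Forward the hub Service (port 8081, per the hub svc above) to localhost
kubectl -n jupyterhub port-forward svc/hub 8081:8081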

Do you think the loopback issue is related to the load balancer?

I’m assuming from the logs that the pods talk to each other over internal pod names and addresses, and don’t have to go out through the LB public address.

The absence of the load balancer shouldn’t affect the hub and singleuser pods, since as you say all communication should be internal to the cluster.
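One way to sanity-check the internal path, roughly (curlimages/curl is just a convenient throwaway image, /hub/health is JupyterHub’s health endpoint, and if network policies are enabled they may block this test pod):

# Run a temporary pod in the namespace and hit the hub Service over cluster DNS
kubectl -n jupyterhub run curl-test --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -sv http://hub:8081/hub/health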

I’ve managed to spin up an IPv6-only K3s node with this K3s configuration:

# https://docs.k3s.io/installation/configuration#configuration-file
write-kubeconfig-mode: "0644"

cluster-init: true

# https://docs.k3s.io/networking/basic-network-options?cni=Canal#single-stack-ipv6-networking

cluster-cidr: <PREFIX>::/96
service-cidr: <PREFIX>:0001::/112

kube-controller-manager-arg:
  - node-cidr-mask-size-ipv6=96

node-name: <hostname>

# Ubuntu /etc/resolv.conf contains an internal resolver (127.0.0.53)
# and may be missing an IPv6 nameserver
kubelet-arg:
  - resolv-conf=/home/ubuntu/resolv-ipv6.conf

resolv-ipv6.conf

# https://one.one.one.one/dns/
nameserver 2606:4700:4700::1111
nameserver 2606:4700:4700::1001

and this working Z2JH config:

# helm upgrade --install jh --repo=https://hub.jupyter.org/helm-chart/ jupyterhub --version=4.0.0 -f z2jh-config.yaml --wait

custom:
  netpol: &netpol
    networkPolicy:
      enabled: false

hub:
  config:
    JupyterHub:
      default_url: /hub/home
      cleanup_servers: true
    KubeSpawner:
      ip: "[::]"
  <<: *netpol

proxy:
  service:
    type: ClusterIP
  chp:
    <<: *netpol

scheduling:
  userScheduler:
    enabled: false
  userPlaceholder:
    enabled: false

singleuser:
  image:
    name: quay.io/jupyter/base-notebook
    tag: latest
  <<: *netpol

ingress:
  enabled: true
  hosts:
    - <external-k8s-ingress-domain>

debug:
  enabled: true
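For reference, K3s picks its config up from /etc/rancher/k3s/config.yaml by default, so applying the above and checking everything came up with IPv6 addresses is roughly (k3s-config.yaml is just whatever you saved the config above as):

# Put the K3s config in the default location, then install K3s with the upstream script
sudo mkdir -p /etc/rancher/k3s
sudo cp k3s-config.yaml /etc/rancher/k3s/config.yaml
curl -sfL https://get.k3s.io | sh -

# After the helm install, the CLUSTER-IP and pod IP columns should show IPv6 addresses
kubectl get svc,pods -o wide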

Tried the exact same config using an internal DNS name for the hosts: value, and I get the same loopback error both via the internal DNS address and when port-forwarding the svc to localhost.

I may try the K3s config setup myself as well.

You mentioned an “istio gateway virtual service” earlier. Is Istio used for anything else related to networking?

I don’t believe it routes internal pod traffic within the same namespace. Could it be a network policy blocking it? If it was, I’d assume there would be no loopback at all.
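A quick way to check both of those (the istio-proxy container name assumes the default Istio sidecar injection):

# List any NetworkPolicies that apply in the namespace
kubectl get networkpolicy -n jupyterhub

# List containers per pod; an istio-proxy sidecar here would mean Istio is intercepting pod traffic
kubectl get pods -n jupyterhub \
  -o custom-columns='NAME:.metadata.name,CONTAINERS:.spec.containers[*].name'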