Restrict singleuser pods from accessing each other (network policy configuration)

I am trying to find a way to adjust the network policy for z2jh such that singleuser pods cannot access one another. I still want the singleuser pods to have access to any other resources (so, no additional restrictions, just denying access to other singleuser pods in the same namespace).

The template seems to always add ingress for the other singleuser pods by default:
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/1.1.3/jupyterhub/templates/singleuser/netpol.yaml

I am confused about how to accomplish this because there seems to be no way to remove the default ingress (from other singleuser pods) in this policy. Any tips on the preferred way to do this?

Hi! Are you referring to particular lines in that file?

If it’s the ingress rule whose podSelector matches the label hub.jupyter.org/network-access-singleuser: "true", that means any pod that has that label has access. Only the hub and proxy pods have that label by default.
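
For reference, the relevant piece of the rendered singleuser policy looks roughly like this (a paraphrase, not a verbatim copy of the template):

ingress:
  - from:
      - podSelector:
          matchLabels:
            hub.jupyter.org/network-access-singleuser: "true"
    ports:
      - port: notebook-port
        protocol: TCP

You can double-check which pods actually carry that label with something like:

kubectl -n <namespace> get pod -l hub.jupyter.org/network-access-singleuser=true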

I can see what you mean, and looking at the kubectl descriptions of the singleuser pods, I don’t see that label on them:

Labels:       app=jupyterhub
              chart=jupyterhub-1.1.3
              component=singleuser-server
              heritage=jupyterhub
              hub.jupyter.org/network-access-hub=true
              hub.jupyter.org/servername=
              hub.jupyter.org/username=XXXXX
              release=jupyter

So, maybe it’s just that by default all singleuser pods can access the others since they are in the same namespace?

I have added a default-deny network policy in that namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
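
This gets applied into the Z2JH namespace, roughly (the filename is just whatever I saved it as):

kubectl -n jupyter apply -f default-deny.yaml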

Here’s the list of policies I get after using kubectl to apply it:

NAME           POD-SELECTOR                                                 AGE
hub            app=jupyterhub,component=hub,release=jupyter                 307d
proxy          app=jupyterhub,component=proxy,release=jupyter               307d
singleuser     app=jupyterhub,component=singleuser-server,release=jupyter   307d
default-deny   <none>                                                       9m51s

It just doesn’t seem to be working: I can still use nmap and see open ports on other singleuser pods:

nmap -p 8888 10.42.3.241
Starting Nmap 7.80 ( https://nmap.org ) at 2021-10-15 14:29 UTC
Nmap scan report for 10.42.3.241
Host is up (0.00040s latency).

PORT     STATE SERVICE
8888/tcp open  sun-answerbook

Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds

I’ve tried using the default-deny idea in values.yaml as well, but still see the same results.

I can also access other ports/services that I run within the pods, which is what I am really trying to prevent. Having specifically 8888 open isn’t really that dangerous since you need the correct access token. Still, I would prefer finding a way to make sure that singleuser pods -cannot- access each other, but even with a default-deny policy I can still connect from one singleuser pod (10.42.3.242) to another (10.42.3.241).
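
(For what it’s worth, ports other than 8888 can be checked quickly without nmap using bash’s /dev/tcp redirection; the IP and port here are just examples:)

timeout 2 bash -c 'cat < /dev/null > /dev/tcp/10.42.3.241/8888' && echo open || echo closed/filtered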

Can you check that NetworkPolicies are correctly enabled on your K8S cluster, and that both ingress and egress policies work? Test this by manually creating pods and policies in a new namespace without Z2JH.

I have some info on that already. Mainly, the current network policies -do- seem to be enabled and working: I know this because (as a rough workaround for the moment) I can block egress on everything except DNS, SSH, HTTP, and HTTPS, and this works just fine. While this does tighten up security, it limits users in ways I wasn’t planning on: I would like them to reach other ports on some other systems, just not on other singleuser pods. I thought modifying the ingress rules on the singleuser policy would be the way to go? I would want the hub and proxy to keep accessing 8888, of course, but anything else should get nothing back (filtered or closed ports) from a singleuser pod.
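
Roughly, that egress restriction in values.yaml looks like this (a sketch reconstructed from the describe output below, not a verbatim copy; the hub rule and one of the DNS rules are added by the chart itself):

singleuser:
  networkPolicy:
    egress:
      # allow DNS, SSH, HTTP and HTTPS to anywhere; everything else is dropped
      - ports:
          - port: 53
            protocol: UDP
      - ports:
          - port: 22
            protocol: TCP
      - ports:
          - port: 80
            protocol: TCP
      - ports:
          - port: 443
            protocol: TCP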

Here’s my output from kubectl -n jupyter describe networkpolicy/singleuser (WITH the Egress restriction rules in-place):

Name:         singleuser
Namespace:    jupyter
Created on:   2020-12-11 14:10:24 -0600 CST
Labels:       app=jupyterhub
              app.kubernetes.io/managed-by=Helm
              chart=jupyterhub-1.1.3
              component=singleuser
              heritage=Helm
              release=jupyter
Annotations:  meta.helm.sh/release-name: jupyter
              meta.helm.sh/release-namespace: jupyter
Spec:
  PodSelector:     app=jupyterhub,component=singleuser-server,release=jupyter
  Allowing ingress traffic:
    To Port: notebook-port/TCP
    From:
      PodSelector: hub.jupyter.org/network-access-singleuser=true
  Allowing egress traffic:
    To Port: 8081/TCP
    To:
      PodSelector: app=jupyterhub,component=hub,release=jupyter
    ----------
    To Port: 53/UDP
    To Port: 53/TCP
    To: <any> (traffic not restricted by destination)
    ----------
    To Port: 53/UDP
    To: <any> (traffic not restricted by destination)
    ----------
    To Port: 22/TCP
    To: <any> (traffic not restricted by destination)
    ----------
    To Port: 80/TCP
    To: <any> (traffic not restricted by destination)
    ----------
    To Port: 443/TCP
    To: <any> (traffic not restricted by destination)
  Policy Types: Ingress, Egress

The nmap command under these conditions results in:

(base) jphillips@jupyter-jphillips:~$ nmap -p 8888 10.42.3.241
Starting Nmap 7.80 ( https://nmap.org ) at 2021-10-15 16:33 UTC
Nmap scan report for 10.42.3.241
Host is up (0.00042s latency).

PORT     STATE  SERVICE
8888/tcp closed sun-answerbook

Nmap done: 1 IP address (1 host up) scanned in 0.10 seconds

So, yes, they appear to be working fine. I just don’t see the deny-all for the namespace having any impact. Even though the network policy suggests that only the hub and proxy should have access (matching on PodSelector: hub.jupyter.org/network-access-singleuser=true), the other singleuser pods can access one another. Maybe this is because it’s prioritizing the singleuser netpol over the deny-all? Even so, I would expect the current ingress rules to only allow the hub and proxy, like you mentioned above. Yet, this is not what I observe: singleuser pods definitely can access one another when I turn off the egress restrictions, and I can’t seem to figure out what to add to the current configuration to make sure they cannot.

I’ll run a test of ingress along the lines you suggest soon…

Yes, network policies seem to be working as expected.

A minimal example:
kubectl -n test apply -f test.yaml
where test.yaml is:

---
apiVersion: v1
kind: Pod
metadata:
  name: temp1
spec:
  hostname: temp1
  securityContext:
    runAsUser: 0
  containers:
  - image: jupyter/base-notebook
    name: temp1
    imagePullPolicy: Always
    env:
      - name: GRANT_SUDO
        value: "yes"
---
apiVersion: v1
kind: Pod
metadata:
  name: temp2
spec:
  hostname: temp2
  securityContext:
    runAsUser: 0
  containers:
  - image: jupyter/base-notebook
    name: temp2
    imagePullPolicy: Always
    env:
      - name: GRANT_SUDO
        value: "yes"
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress

This setup only blocks ingress because I need to pull nmap from the apt repos in one of the pods for the test. I can run nmap from one pod and see that 8888 is filtered; then, after deleting the network policy and re-running the command, 8888 shows as open:
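
For completeness, the test steps were roughly the following (from memory; the target IP is whatever kubectl reports for temp2):

kubectl -n test get pod -o wide          # note temp2's IP
kubectl -n test exec -it temp1 -- bash   # the pods run as root, so apt works directly
apt-get update && apt-get install -y nmap
nmap -p 8888 <temp2 IP>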

(base) root@temp1:~# nmap -p 8888 10.42.0.221
Starting Nmap 7.80 ( https://nmap.org ) at 2021-10-15 17:05 UTC
Nmap scan report for 10.42.0.221
Host is up (0.00028s latency).

PORT     STATE    SERVICE
8888/tcp filtered sun-answerbook
MAC Address: DA:42:B5:D5:70:77 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.29 seconds
(base) root@temp1:~# nmap -p 8888 10.42.0.221
Starting Nmap 7.80 ( https://nmap.org ) at 2021-10-15 17:06 UTC
Nmap scan report for 10.42.0.221
Host is up (0.00028s latency).

PORT     STATE SERVICE
8888/tcp open  sun-answerbook
MAC Address: DA:42:B5:D5:70:77 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.26 seconds

Generally, things look fine here. This got me thinking a little about the default-deny policy I had in place in the Z2JH deployment, which blocked both ingress and egress, so I changed it to ingress-only like the example above, but still no luck there.

I have been working with different network policy changes and can’t find anything that helps. For any of my deployments using 1.1.3, all of which are using the network policies provided with Z2JH, singleuser pods can contact other singleuser pods.

Is anyone else able to replicate this issue? Any vanilla 1.1.3 install shows this problem for me.

Here is a test-run using k3d:

$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE   VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   21m   v1.20.2+k3s1

This is all I have in config.yaml (just to allow nmap installation):

singleuser:
  uid: 0
  cmd: start-notebook.sh
  defaultUrl: "/lab"
  extraEnv:
    GRANT_SUDO: "yes"
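
The deployment itself was a standard Z2JH Helm install, roughly (release and namespace names match the output below):

helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
helm upgrade --install jupyter jupyterhub/jupyterhub \
  --namespace jupyter --create-namespace \
  --version 1.1.3 --values config.yaml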

I can then see the user pods created with their IPs:

$ kubectl -n jupyter get pod -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP           NODE                       NOMINATED NODE   READINESS GATES
svclb-proxy-public-2xmlb          0/1     Pending   0          17m     <none>       <none>                     <none>           <none>
continuous-image-puller-pwg66     1/1     Running   0          17m     10.42.0.10   k3d-k3s-default-server-0   <none>           <none>
user-scheduler-5964cc5bd5-gntlf   1/1     Running   0          17m     10.42.0.14   k3d-k3s-default-server-0   <none>           <none>
user-scheduler-5964cc5bd5-gmh4l   1/1     Running   0          17m     10.42.0.13   k3d-k3s-default-server-0   <none>           <none>
proxy-59d6b7c9b5-pqs8r            1/1     Running   0          4m56s   10.42.0.20   k3d-k3s-default-server-0   <none>           <none>
hub-6db54664cc-8pwrk              1/1     Running   0          4m6s    10.42.0.21   k3d-k3s-default-server-0   <none>           <none>
jupyter-jphillips                 1/1     Running   0          3m21s   10.42.0.22   k3d-k3s-default-server-0   <none>           <none>
jupyter-jphillipstest             1/1     Running   0          2m28s   10.42.0.23   k3d-k3s-default-server-0   <none>           <none>
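
Getting a shell in a user pod is just a plain exec, roughly:

kubectl -n jupyter exec -it jupyter-jphillips -- /bin/bash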

So, I exec into jphillips and I can access ports on jphillipstest:

root@jupyter-jphillips:~# cat /etc/hosts 
# Kubernetes-managed hosts file.
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.42.0.22      jupyter-jphillips
root@jupyter-jphillips:~# nmap -p 8888 10.42.0.23
Starting Nmap 7.80 ( https://nmap.org ) at 2021-10-18 16:57 UTC
Nmap scan report for 10.42.0.23
Host is up (0.00017s latency).

PORT     STATE SERVICE
8888/tcp open  sun-answerbook
MAC Address: E2:BF:49:FF:B3:78 (Unknown)

Nmap done: 1 IP address (1 host up) scanned in 0.31 seconds

Shouldn’t this pod be unable to access the other one using the default network policy?

What CNI plugin are you using?

Flannel, right? I think that’s the k3s default, unless my understanding of CNIs is off. I’ve got another cluster with Calico, so I’ll try the same tests on that one as well. I would prefer to stick with k3s defaults when possible, though.

As an update: I was able to find a reasonable workaround for the moment. I can turn off Z2JH’s default network policy for singleuser pods and then define my own network policy that only restricts ingress on the singleuser servers:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: singleuser
spec:               
  ingress:                                                 
  - from: 
    - podSelector:       
        matchLabels:
          hub.jupyter.org/network-access-singleuser: "true"
    ports:        
    - port: notebook-port
      protocol: TCP
  # - ports:        
  #   - port: 8081    
  #     protocol: TCP      
  #   to:                 
  #   - podSelector:
  #       matchLabels:
  #         app: jupyterhub
  #         component: hub
  # - ports:         
  #   - port: 53
  #     protocol: UDP
  #   - port: 53         
  #     protocol: TCP
  # - to:                     
  #   - ipBlock:
  #       cidr: 0.0.0.0/0
  #       except:    
  #       - 169.254.169.254/32    
  podSelector:
    matchLabels:
      app: jupyterhub
      component: singleuser-server
  policyTypes:
  - Ingress
  # - Egress
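
For reference, the chart’s own singleuser policy is turned off in values.yaml, roughly like this (assuming the 1.1.3 key layout):

singleuser:
  networkPolicy:
    enabled: false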

When I uncomment the egress parts (which just mimic the netpol with Z2JH), then I get the problematic behavior, so it’s looking like something particular about how the ingress/egress rules are applied.

I don’t think Flannel enforces network policy. It would be interesting to hear if your tests behaved differently on the cluster with Calico, which does enforce it.

From https://github.com/flannel-io/flannel/blob/master/README.md#networking-details :

Flannel does not control how containers are networked to the host, only how the traffic is transported between hosts. …
Flannel is focused on networking. For network policy, other projects such as Calico can be used.

Yep, that’s indeed the issue.

Clean install of k3s+calico, and the network policies are functioning completely as expected (singleuser pods cannot contact other singleuser pods).
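
In case it helps anyone else, the k3s side of that is roughly the following (details from memory; Calico itself is installed afterwards per its own docs):

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy" sh -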

Looks like there is some, but limited, support for network policy when using Flannel. The good news for me is that my current workaround with an ingress-only policy seems to work, but I guess I will be transitioning to Calico (or maybe some other k8s distro) soon.

I had thought that k3s was one of the main platforms that Z2JH runs its test suite on, but maybe it’s not k3s straight out of the box?

Either way, thank you @manics and @csears for the helpful suggestions.

Z2JH uses K3s with Calico for CI: https://github.com/jupyterhub/action-k3s-helm (a GitHub action to install K3S, Calico, and Helm).
