I’m bringing this question over from gitter because it may be a longer-term discussion.
I have a question for people deploying Z2JH on Google GKE. I’ve deployed an (external) NFS server using U18.04 on a VM. I can mount the NFS shares on other instances in GCE. However, I cannot mount the shares on instances created in GKE node-pools, much less mount them in pods. I can ping the NFS server, but the NFS mount requests appear to just hang. I’m doing this from the U18.04 nodes on which pods are deployed, in an attempt to debug why the pods themselves can’t mount NFS.
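For reference, the debugging commands I’m running on the nodes look roughly like the following (the server IP and export path here are placeholders, not my actual values):

```shell
# Placeholder internal IP of the NFS server VM -- substitute your own.
NFS_SERVER=10.128.0.5

# Check that the server's RPC services (portmapper, mountd, nfs) are reachable.
rpcinfo -p "$NFS_SERVER"

# List the exports the server advertises.
showmount -e "$NFS_SERVER"

# Try a verbose manual mount to see where it hangs. Forcing NFSv4 avoids the
# portmapper/mountd side channels, so it only needs TCP 2049 to the server.
sudo mount -v -t nfs -o vers=4 "$NFS_SERVER:/export" /mnt
```

On the GKE nodes, `rpcinfo`/`showmount` hang the same way the mount does, which is what makes me suspect a firewall rather than the NFS server itself.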
So, my question: If you’ve gotten NFS to work in such a situation, can you share your configurations and/or experience on how you got it to work?
I’m using the configuration at https://github.com/berkeley-dsep-infra/datahub/blob/22022e5cfbf6d610eb01fc49ac2277f9e0645f03/docs/topic/cluster-config.rst, and also a modified version of it (disabling ip-alias and network policy), created with the commands below.
In both cases, I can’t mount NFS on the nodes themselves. Clearly there’s a firewall involved, but I can’t seem to find a way to either disable it or allow the local connections.
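If it is a VPC firewall rule, I’d expect something like the following to open NFS traffic from the cluster to the server (all names, tags, and CIDRs below are placeholders; NFSv4 needs only TCP 2049, while v3 also needs portmapper on 111 and the mountd port, which is dynamic unless you pin it on the server):

```shell
# Placeholder network, source range, and target tag -- substitute your own.
# tcp:2049       NFS (sufficient for NFSv4)
# tcp/udp:111    portmapper (needed for NFSv3)
# tcp:20048      mountd, assuming it has been pinned to this port on the server
gcloud compute firewall-rules create allow-nfs-from-gke \
  --network default \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:2049,tcp:111,udp:111,tcp:20048 \
  --source-ranges 10.0.0.0/8 \
  --target-tags nfs-server
```

I haven’t confirmed this is the right rule for my setup, so if anyone has a known-good firewall configuration for NFS between GKE node pools and a GCE VM, that would be very helpful.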
```shell
gcloud beta container clusters create
gcloud container node-pools create
--min-nodes 0 --max-nodes 20
```