apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as the PVC
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.10.0.243 # IP address of the NFS server
    path: "/share" # path to the exported directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany # must be the same as the PersistentVolume
  resources:
    requests:
      storage: 1Gi
NFS storage is tricky in itself and with k8s, so setting it up for the user pods also becomes a bit tricky.
I think you have created a volume, and your Helm chart configuration is almost correct (one space too few before claimName) for mounting it. You should be able to confirm this by:
# ensure you find a volume on pod
# ensure you find a volumeMount on pod's container
kubectl get pod jupyter-myuserpod -o yaml
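For reference, here is a minimal sketch of the Z2JH values that would mount the claim above into every user pod; the volume name nfs-share and the mount path /home/jovyan/share are just placeholders I picked for illustration:

singleuser:
  storage:
    extraVolumes:
      - name: nfs-share                 # arbitrary name, must match the mount below
        persistentVolumeClaim:
          claimName: nfs-pvc            # the PVC created above
    extraVolumeMounts:
      - name: nfs-share
        mountPath: /home/jovyan/share   # example path inside the user container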
If you get this working, I think you may still run into issues where the NFS server provides files that the typical jovyan user (uid 1000) isn’t allowed to access, which can force you to use a “volume mount hack” as well.
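As a rough illustration of that kind of hack, and assuming your chart version exposes singleuser.initContainers, that the export allows root writes (no_root_squash), and that chowning the share is acceptable, something along these lines is sometimes used (image, names, and path are my own examples):

singleuser:
  uid: 1000   # jovyan's uid in the standard docker-stacks images
  fsGid: 100  # jovyan's primary gid (users)
  initContainers:
    - name: fix-share-permissions       # runs as root before the notebook container starts
      image: busybox:1.36
      command: ["sh", "-c", "chown -R 1000:100 /share-to-fix"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: nfs-share               # same volume as in extraVolumes above
          mountPath: /share-to-fix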
Part of that guide (basic-with-nfs-volumes) covers mounting NFS volumes, including an extraVolumes entry that is shared across all users. That part of the guide is not specific to k3s and might help. It does use nfs-server-provisioner, but I have other examples. Your example looks correct as far as I can tell – did you create the volume ahead of time?
The next part of the guide (called “fancypants”) reads the list of NFS volumes to mount per-user from a JSON file. The “fancy” part is that it also themes the launch list.
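I don’t have that exact config handy, but the general idea can be sketched like this; the file path, JSON layout, and hook name are all made up for illustration (hub.extraConfig holds Python that is appended to jupyterhub_config.py):

hub:
  extraConfig:
    per-user-nfs-mounts: |
      import json

      # hypothetical JSON file baked into or mounted on the hub pod
      MOUNTS_FILE = "/srv/jupyterhub/user-mounts.json"

      def add_user_volumes(spawner):
          with open(MOUNTS_FILE) as f:
              per_user = json.load(f)
          # append one volume + mount per entry listed for this user
          for m in per_user.get(spawner.user.name, []):
              spawner.volumes.append({
                  "name": m["name"],
                  "persistentVolumeClaim": {"claimName": m["claimName"]},
              })
              spawner.volume_mounts.append({
                  "name": m["name"],
                  "mountPath": m["mountPath"],
              })

      c.Spawner.pre_spawn_hook = add_user_volumes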
It might help as an example / playground. If you have a Linux machine lying around, you can get k3s up and running in a minute, get the basic Z2JH up in less than five, and then start on the NFS stuff.
I managed to get my Jupyter pods to use the NFS dynamic provisioner by using the NFS storage class that I created… the beauty of this is that the pods can now balance themselves across the various worker nodes as they spin up.
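For reference, the relevant Z2JH setting is roughly this; nfs-client is just an example class name, use whatever your provisioner registered:

singleuser:
  storage:
    dynamic:
      storageClass: nfs-client   # storage class backed by the NFS dynamic provisioner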
I noticed the hub PVC can’t be run off the NFS storage class, which is odd, so I fell back on the microk8s default hostpath storage class.
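If anyone wants to do the same, pinning the hub’s database PVC to a different class looks roughly like this (microk8s-hostpath is the default class name on MicroK8s, adjust as needed):

hub:
  db:
    pvc:
      storageClassName: microk8s-hostpath   # keep the hub's sqlite file off NFS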
JupyterHub uses an sqlite database file saved on the hub volume. In theory sqlite can work on NFS if configured correctly, though I’ve never succeeded.
The only thing I’ll note is that, using Helm, there was a bit of an annoyance: a basic deployment needs to be deployed first, and then this change can be pushed. A little more on that was discussed here: JupyterHub hub-db-dir PV Question