Additional storage volumes

I tried to add an additional static NFS PVC to be mounted on the pods, but it does not seem to work.

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS        REASON   AGE
nfs-pv                                     1Gi        RWX            Retain           Bound    default/nfs-pvc                     manual                       54m
kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
nfs-pvc        Bound    nfs-pv                                     1Gi        RWX            manual             52m
kubectl get pods -A
phub                 continuous-image-puller-8mggl                     1/1     Running   0          5d4h
phub                 continuous-image-puller-rndpl                     1/1     Running   0          5d4h
phub                 hub-6cc4f4c595-bljc9                              1/1     Running   0          48m
phub                 jupyter-des                                       1/1     Running   0          47m
phub                 jupyter-raymond                                   1/1     Running   0          42m
phub                 proxy-747cffc6db-9n9qp                            1/1     Running   0          21h
phub                 user-scheduler-fdddf9b65-t9msx                    1/1     Running   0          14d
phub                 user-scheduler-fdddf9b65-zcftr                    1/1     Running   5          14d

My configurations

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.10.0.243 # IP address of the NFS server
    path: "/share" # path to directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany #  must be the same as PersistentVolume
  resources:
    requests:
      storage: 1Gi
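
A quick sanity check for the NFS export itself, independent of JupyterHub, is a throwaway pod that mounts the same claim; the pod name and busybox image below are arbitrary placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-test # placeholder name
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "ls -la /mnt && sleep 3600"]
      volumeMounts:
        - name: nfs
          mountPath: /mnt
  volumes:
    - name: nfs
      persistentVolumeClaim:
        claimName: nfs-pvc # the claim defined above

If kubectl exec nfs-test -- ls /mnt shows the files from the share, the PV/PVC pair is fine and any remaining problem is on the JupyterHub side.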

config.yaml configuration

singleuser:
  storage:
    capacity: 2G
  extraVolumes:
    - name: jupyterhub-shared
      persistentVolumeClaim:
       claimName: nfs-pvc
  extraVolumeMounts:
    - name: jupyterhub-shared
      mountPath: /home/shared

It does not appear at /home/shared in the individual user pods… what am I doing wrong?

NFS storage is tricky in itself and with k8s, so setting it up for the user pods also becomes a bit tricky.

I think you have created a volume, and your Helm chart configuration for mounting it is almost correct (one space too few before claimName). You should be able to confirm this by:

# ensure you find a volume on the pod
# ensure you find a volumeMount on the pod's container
kubectl get pod jupyter-myuserpod -o yaml

If you get this working, I think you may still run into issues where the NFS server provides files that the typical jovyan user (user id 1000) isn't allowed to access, which can force you to use a “volume mount hack” as well.
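
One common shape of that hack, sketched here only as a starting point rather than an exact recipe, is an init container that chowns the share to jovyan (uid 1000, gid 100, the docker-stacks defaults) before the notebook container starts, using the chart's singleuser.initContainers option and the same volume name as in your config:

singleuser:
  initContainers:
    - name: fix-shared-permissions # illustrative name
      image: busybox
      command: ["sh", "-c", "chown -R 1000:100 /home/shared"]
      securityContext:
        runAsUser: 0 # needs root to chown files owned by the NFS server
      volumeMounts:
        - name: jupyterhub-shared
          mountPath: /home/shared

Whether this works depends on how the NFS server squashes root: with root_squash enabled the chown will fail, and the ownership has to be fixed on the server side instead.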

I recall this thread was very useful for me when I set up NFS storage initially: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/421

Hello Erik,

I addressed the space issue:

singleuser:
  storage:
    capacity: 2G
  extraVolumes:
    - name: jupyterhub-shared
      persistentVolumeClaim:
        claimName: nfs-pvc
  extraVolumeMounts:
    - name: jupyterhub-shared
      mountPath: /home/shared

but it still doesn't appear mounted.

A snippet of the -o yaml output of a user pod:

      value: jupyter/minimal-notebook:2343e33dec46
    image: jupyter/minimal-notebook:2343e33dec46
    imagePullPolicy: IfNotPresent
    lifecycle: {}
    name: notebook
    ports:
    - containerPort: 8888
      name: notebook-port
      protocol: TCP
    resources:
      limits:
        cpu: "4"
        memory: "8589934592"
      requests:
        cpu: "1"
        memory: "1073741824"
    securityContext:
      runAsGroup: 0
      runAsUser: 1000
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /home/jovyan
      name: volume-raymond
  dnsPolicy: ClusterFirst
  enableServiceLinks: true

It doesn't seem to get mounted.
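
A couple of jsonpath queries show exactly which volumes and mounts the pod was created with (using the phub namespace and the jupyter-raymond pod from the earlier listing); also keep in mind that extraVolumes only affect servers spawned after the helm upgrade, so an already-running user server has to be stopped and started again to pick them up:

kubectl get pod jupyter-raymond -n phub -o jsonpath='{.spec.volumes[*].name}'
kubectl get pod jupyter-raymond -n phub -o jsonpath='{.spec.containers[0].volumeMounts[*].mountPath}'

If jupyterhub-shared and /home/shared do not show up in the output, the configuration never made it into the spawned pod.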

Desmond,

Last night I created a guide for installing Z2JH on K3s - see https://github.com/dirkcgrunwald/zero-to-jupyterhub-k3s

Part of that guide (basic-with-nfs-volumes) covers mounting NFS volumes, including an extraVolumes entry that is shared across all users. That part is not specific to k3s and might help. It does use nfs-server-provisioner, but I have other examples. Your example looks correct as far as I can tell – did you create the volume ahead of time?

The next part of the guide (called “fancypants”) reads the list of NFS volumes to mount per-user from a JSON file. The “fancy” part is that it also themes the launch list.

It might help as an example / playground. If you have a Linux machine lying around, you can get k3s up and running in a minute, get the basic Z2JH up in less than five, and then start on the NFS stuff.


Hello Dirk,

I managed to get my Jupyter pods to use the NFS dynamic provisioner via the NFS storage class I created… the beauty of this is that the pods can now balance themselves across the various worker nodes as they spin up.
I noticed the hub PVC can't run off the NFS PVC, which is odd, so I leveraged MicroK8s' default hostpath storage class instead.
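
For anyone following along, the knobs involved are roughly these (sketched from memory, so double-check against the chart's configuration reference); nfs-client stands in for whatever name the NFS storage class was given, and microk8s-hostpath is MicroK8s' built-in hostpath class:

singleuser:
  storage:
    dynamic:
      storageClass: nfs-client # NFS storage class used for per-user volumes
hub:
  db:
    pvc:
      storageClassName: microk8s-hostpath # keep the hub db on local hostpath storage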

JupyterHub uses an sqlite database file saved on the hub volume. In theory sqlite can work on NFS if configured correctly, though I’ve never succeeded.

I have had success hosting the hub sqlite db on NFS. We are using AWS EFS to host this.

Here’s a portion of the helm chart:

hub:
  extraVolumes:
    - name: hub-db-dir
      persistentVolumeClaim:
        claimName: nfs-host-pvc

Here’s the PV and PVC config:

apiVersion: v1 
kind: PersistentVolume 
metadata: 
  name: nfs-host-pv 
spec: 
  capacity: 
    storage: 1Gi 
  accessModes: 
    - ReadWriteMany 
  nfs: 
    server: amazon-efs-hub-storage-server
    path: "/host-hub" 
 
--- 
kind: PersistentVolumeClaim 
apiVersion: v1 
metadata: 
  name: nfs-host-pvc
spec: 
  accessModes: 
    - ReadWriteMany 
  storageClassName: "" 
  resources: 
    requests: 
      storage: 1Gi

The only thing I’ll note is that with Helm there was a bit of an annoyance: a basic deployment needs to be deployed first, and then this change can be pushed. A little more on that was discussed here: JupyterHub hub-db-dir PV Question
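
If I remember the order correctly, it boils down to something like this, with jhub as a placeholder release/namespace name and nfs-host.yaml as a placeholder file holding the PV and PVC above:

# 1. initial install without the hub-db-dir override, so the chart comes up with its defaults
helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --values config.yaml
# 2. create the NFS-backed PV and PVC for the hub database
kubectl apply -f nfs-host.yaml -n jhub
# 3. add the hub.extraVolumes override to config.yaml and push it
helm upgrade jhub jupyterhub/jupyterhub --namespace jhub --values config.yaml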

Does anyone know if there’s a way I can deploy certain volumes to certain users?