Is there a way to mount more than 1 volume?

  • Some notes:

I was directed here via this issue on GitHub. I assume this is not a bug and is instead a support request, as tagged. The issue is 3175 in jupyterhub/zero-to-jupyterhub-k8s, but unfortunately I’m limited to 2 links per post as a new user.

  • Some environment setup:

I am running z2jh via EKS on AWS. With the requirement to use a ReadWriteMany volume, I switched from EBS to EFS following this guide.
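
For reference, the EFS storage here is provisioned statically. A minimal sketch of one StorageClass / PV / PVC triple, reconstructed from the kubectl describe output further down in this thread (anything not shown in that output, such as the reclaim policy, is an assumption):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared-efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-z2jh-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  storageClassName: shared-efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxxxxxx   # EFS filesystem ID (redacted)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-z2jh-pvc
  namespace: custom-dev
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: shared-efs-sc
  volumeName: shared-z2jh-pv
  resources:
    requests:
      storage: 20Gi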

To provide more control, I am using custom Docker images. My hub image is based on jupyterhub/k8s-hub:3.0.0-beta.3, while my user image is based on jupyter/minimal-notebook:2023-07-17. Currently these just change the environment variables from jovyan to a custom user name and run usermod to reflect the changes.

# Base image, as mentioned above; values assumed to arrive via --build-arg
FROM jupyter/minimal-notebook:2023-07-17
ARG NB_USER
ARG HOME
ARG PWD
ARG JUPYTER_SERVER_ROOT
ARG NB_UID
ARG NB_GID

USER root
# Persist the build args as runtime environment variables
ENV NB_USER=${NB_USER}
ENV HOME=${HOME}
ENV PWD=${PWD}
ENV JUPYTER_SERVER_ROOT=${JUPYTER_SERVER_ROOT}
ENV NB_UID=${NB_UID}
ENV NB_GID=${NB_GID}
# Rename jovyan to the custom user and move the home directory to match
RUN usermod -l ${NB_USER} jovyan
RUN usermod -d ${HOME} -m ${NB_USER}
WORKDIR ${HOME}
USER ${NB_USER}
CMD ["/bin/bash", "-c", "fix-permissions /home; start-singleuser.sh"]

Since I’m using jupyter/minimal-notebook, I followed this suggestion to mount the EFS.

  • The overall goal/requirement:

Have one volume location for user storage, and a separate volume for shared storage between users.
The EFS should reflect:

β”œβ”€β”€ home
β”‚   β”œβ”€β”€ shared
β”‚   β”œβ”€β”€ user1
β”‚   β”œβ”€β”€ user2
...

While the user pod should reflect:

β”œβ”€β”€ home
β”‚   β”œβ”€β”€ custom
β”‚   β”‚   β”œβ”€β”€ user1
β”‚   β”‚   β”œβ”€β”€ shared
...

  • Baseline 1:

I can connect to the EFS using static storage without issue; the below works as expected.

  storage:
    type: static
    homeMountPath: /home/custom/{username}
    static:
      pvcName: static-pvc
      subPath: "{username}"

  • Baseline 2:

I can connect to the EFS using an initContainer without issue; the below works as expected.

  storage:
    type: none
    extraVolumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: shared-pvc
    extraVolumeMounts:
      - name: persistent-storage
        mountPath: /home/custom/shared
        subPath: shared
    capacity: 5Gi
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: persistent-storage
        mountPath: /nfs
        subPath: shared
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chown -R 1000:100 /nfs) 

  • Attempt 1:

Set type: static while also using extraVolumes / extraVolumeMounts.

Issue: The pod hangs on spawning.

  storage:
    type: static
    homeMountPath: /home/custom/{username}
    static:
      pvcName: static-pvc
      subPath: "{username}"
    extraVolumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: shared-pvc
    extraVolumeMounts:
      - name: persistent-storage
        mountPath: /home/custom/shared
        subPath: shared
    capacity: 5Gi
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: persistent-storage
        mountPath: /nfs
        subPath: shared
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chown -R 1000:100 /nfs) 

Neither the hub logs nor the user pod’s events contain any relevant information, and eventually the pod times out with:

2023-07-27T22:25:54Z [Warning] Unable to attach or mount volumes: unmounted volumes=[home], unattached volumes=, failed to process volumes=[home]: timed out waiting for the condition

  • Attempt 2:

Use two separate initContainers, one per volume.

  storage:
    type: none
    extraVolumes:
      - name: persistent-shared
        persistentVolumeClaim:
          claimName: shared-pvc
      - name: persistent-user
        persistentVolumeClaim:
          claimName: user-pvc
    extraVolumeMounts:
      - name: persistent-shared
        mountPath: /home/custom/shared
        subPath: home/custom/shared
      - name: persistent-user
        mountPath: /home/custom/{username}
        subPath: home/custom/{username}
  initContainers:
    - name: nfs-shared
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: persistent-shared
        mountPath: /nfs/shared
        subPath: home/custom/shared
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chown -R 1000:100 /nfs)
    - name: nfs-user
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: persistent-user
        mountPath: /nfs/user
        subPath: home/custom/personal/{username}
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chown -R 1000:100 /nfs) 

Again, the pod times out with a similar message.

2023-07-28T18:28:56Z [Warning] Unable to attach or mount volumes: unmounted volumes=[persistent-shared], unattached volumes=, failed to process volumes=[persistent-shared]: timed out waiting for the condition

While the hub/user logs don’t seem to be helpful, the kubectl describe pod output for the user pod may have something relevant, though I’m unsure:

Volumes:
    persistent-shared:
        Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:  shared-pvc
        Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
        ClaimName:   user-pvc
        ReadOnly:    false

Both PVCs appear at the same level, nested under the persistent-shared volume. However, in the config.yaml they are set as different volumes.

Though other relevant sections appear as expected:

Containers:
    Mounts:
        /home/custom/username from persistent-storage (rw,path="home/custom/personal/username")
        /home/custom/shared from persistent-shared (rw,path="home/custom/shared")
Init Containers:
    nfs-shared-fix:
        Container ID:
        Image:         alpine
        Image ID:
        Port:          <none>
        Host Port:     <none>
        Command:
            sh
            -c
            (chmod 0775 /nfs; chown -R 1000:100 /nfs)
        State:          Waiting
            Reason:       PodInitializing
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
            /nfs/user from persistent-storage (rw,path="home/custom/personal/username")
    nfs-user-fix:
        Container ID:
        Image:         alpine
        Image ID:
        Port:          <none>
        Host Port:     <none>
        Command:
            sh
            -c
            (chmod 0775 /nfs; chown -R 1000:100 /nfs)
        State:          Waiting
            Reason:       PodInitializing
        Ready:          False
        Restart Count:  0
        Environment:    <none>
        Mounts:
            /nfs/shared from persistent-shared (rw,path="home/custom/shared")

Is there any other way I’m missing to mount more than one volume?

I think this will be related to EKS or how your volumes are set up. Z2JH generates the Kubernetes manifests, but it relies on the cluster to fulfill them.

What happens if you remove the init containers and just mount the volumes into the singleuser pod, but outside the home directory? This will help rule out an interaction caused by mounting the volumes twice.

What does kubectl describe pv,pvc show?

@manics thanks for the response -

Describe before suggested changes:

> kubectl describe pvc -n custom-dev shared-z2jh-pvc
Namespace:     custom-dev
StorageClass:  shared-efs-sc
Status:        Bound
Volume:        shared-z2jh-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> kubectl describe pv -n custom-dev shared-z2jh-pv
Name:            shared-z2jh-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    shared-efs-sc
Status:          Bound
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        20Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            efs.csi.aws.com
    FSType:
    VolumeHandle:      fs-xxxxxxxxxxxxxxx
    ReadOnly:          false
    VolumeAttributes:  <none>
Events:                <none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> kubectl describe pvc -n custom-dev user-z2jh-pvc
Namespace:     custom-dev
StorageClass:  user-efs-sc
Status:        Bound
Volume:        user-z2jh-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> kubectl describe pv -n custom-dev user-z2jh-pv
Name:            user-z2jh-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    user-efs-sc
Status:          Bound
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        20Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            efs.csi.aws.com
    FSType:
    VolumeHandle:      fs-xxxxxxxxxxxxxxx
    ReadOnly:          false
    VolumeAttributes:  <none>
Events:                <none>

Next, I implemented the suggested changes. I removed the initContainers, and I kept storage.type: none in an attempt to mount both volumes under extraVolumes / extraVolumeMounts.
My config.yaml:

  storage:
    type: none
    extraVolumes:
      - name: persistent-shared
        persistentVolumeClaim:
          claimName: shared-z2jh-pvc
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: user-z2jh-pvc
    extraVolumeMounts:
      - name: persistent-shared
        mountPath: /mnt/shared
        subPath: home/custom/shared
      - name: persistent-storage
        mountPath: /mnt/usr
        subPath: home/custom/personal/{username}

The results are the same.

2023-08-01T14:54:45Z [Warning] Unable to attach or mount volumes: unmounted volumes=[persistent-shared], unattached volumes=, failed to process volumes=[persistent-shared]: timed out waiting for the condition

Describe after suggested changes:

> kubectl describe pvc -n custom-dev shared-z2jh-pvc
Namespace:     custom-dev
StorageClass:  shared-efs-sc
Status:        Bound
Volume:        shared-z2jh-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> kubectl describe pv -n custom-dev shared-z2jh-pv
Name:            shared-z2jh-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    shared-efs-sc
Status:          Bound
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        20Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            efs.csi.aws.com
    FSType:
    VolumeHandle:      fs-xxxxxxxxxxxxxxx
    ReadOnly:          false
    VolumeAttributes:  <none>
Events:                <none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> kubectl describe pvc -n custom-dev user-z2jh-pvc
Namespace:     custom-dev
StorageClass:  user-efs-sc
Status:        Bound
Volume:        user-z2jh-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Used By:       <none>
Events:        <none>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> kubectl describe pv -n custom-dev user-z2jh-pv
Name:            user-z2jh-pv
Labels:          <none>
Annotations:     pv.kubernetes.io/bound-by-controller: yes
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    user-efs-sc
Status:          Bound
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        20Gi
Node Affinity:   <none>
Message:
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            efs.csi.aws.com
    FSType:
    VolumeHandle:      fs-xxxxxxxxxxxxxxx
    ReadOnly:          false
    VolumeAttributes:  <none>
Events:                <none>

Note: both the user and shared PVCs show a Used By: jupyter-pastrami attribute while the mount is being attempted (from when I log in as the user and see the “Your server is starting up.” screen, until the timeout).

I think the culprit may be an undocumented EKS/EFS limitation: [this linked issue] has a possible solution.

I also found [another issue], which I don’t think is a problem for you, but worth bearing in mind just in case.
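
As confirmed in the next post, the workaround amounts to giving each PVC/PV pair its own EFS filesystem, so the two PVs no longer share a CSI volumeHandle. A minimal sketch (the two filesystem IDs are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-z2jh-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  storageClassName: shared-efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-aaaaaaaaaaaaaaaaa   # first EFS filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: user-z2jh-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  storageClassName: user-efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-bbbbbbbbbbbbbbbbb   # second, separate EFS filesystem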


Confirming this worked as expected @manics - by placing each PVC/PV pair onto its own EFS filesystem, I was able to mount without issue using the initContainers method above.

Thanks for your help!


I thought it also important to note -
I was also able to mount the volumes in the traditional way, now that they are on separate EFS filesystems. That is, I was able to remove the initContainers completely and use type: static, while mounting the shared volume with a single extraVolumes / extraVolumeMounts entry.
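
For completeness, a sketch of what that final configuration might look like, combining Baseline 1 with a single shared extraVolume (PVC names follow the ones above; untested beyond what this thread reports):

  storage:
    type: static
    homeMountPath: /home/custom/{username}
    static:
      pvcName: user-z2jh-pvc           # user EFS filesystem
      subPath: "{username}"
    extraVolumes:
      - name: persistent-shared
        persistentVolumeClaim:
          claimName: shared-z2jh-pvc   # separate shared EFS filesystem
    extraVolumeMounts:
      - name: persistent-shared
        mountPath: /home/custom/shared
        subPath: shared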
