Recommendations on Dynamic PersistentVolumes on Newer AWS EKS Versions

I’ve been walking through z2jh documentation which is thorough and great!

I set up an AWS EKS cluster with eksctl, then ran helm install for jupyterhub 2.0.0. Newer versions of EKS use the EBS CSI driver for dynamic PersistentVolume provisioning.

The storage documentation (which I'm guessing is older) covers EFS, and I don't see anything about the EBS CSI driver there.

Should I not be using EBS CSI and use EFS instead? What are the recommended next steps after deploying JupyterHub to get it working with the EBS CSI driver? Currently my hub pod is stuck in a "Pending" state. I don't want to make this a bug thread and can work with the EBS CSI community on potential resolutions.

What I’m looking for are pointers/recommendations from the experts in this community about how they went about installing v2.0.0 on newer EKS clusters. I could then contribute back to z2jh with some docs on the solution.

I’m about to follow this documentation to troubleshoot problems. Any other threads/docs I should be following besides this?
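For anyone landing here with the same symptom, these are the commands I'd start with (a sketch; the `jupyterhub` namespace and the `hub-db-dir` PVC name are assumptions based on the install below and the default z2jh chart):

```shell
# Why is the hub pod Pending? The Events section at the bottom is the key part.
kubectl describe pod -n jupyterhub -l component=hub

# A PVC stuck in Pending usually means the storage provisioner isn't working.
kubectl get pvc -n jupyterhub
kubectl describe pvc hub-db-dir -n jupyterhub
```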

Version Info:

$ eksctl info   
eksctl version: 0.127.0-dev+d97310bd4.2023-01-27T12:47:55Z
kubectl version: v1.25.0

There’s a related thread here:

I think the best approach is to get EBS CSI working on your cluster ignoring JupyterHub (test it with a manually created Pod/PVC and check the PV is dynamically created), then work on the JupyterHub configuration.
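To test that independently of JupyterHub, a minimal PVC + Pod manifest along these lines should trigger dynamic provisioning (the names and the `gp2` StorageClass are assumptions; adjust for your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-csi-test-claim    # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2       # the usual EKS default; change if yours differs
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ebs-csi-test-pod      # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo ok > /data/out && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: test-vol
  volumes:
    - name: test-vol
      persistentVolumeClaim:
        claimName: ebs-csi-test-claim
```

After `kubectl apply -f test.yaml`, `kubectl get pv,pvc` should show a dynamically created PV bound to the claim. Note that with `WaitForFirstConsumer` volume binding the PV only appears once the Pod is scheduled, which is why the manifest includes a Pod and not just the PVC.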

For posterity’s sake here’s the correct order from eksctl create cluster through helm install jupyterhub on EKS 1.24:

Create the cluster:

eksctl create cluster \
--name blah --region us-west-1 \
--ssh-access=true --ssh-public-key=~/.ssh/ \
--nodegroup-name=hub-node --node-type=t2.xlarge \
--nodes=1 --nodes-min=1 --nodes-max=5

Check to see if you have an OIDC provider in EKS already

If you just created your cluster you probably don’t, but hey, bash is fun:

oidc_id=$(aws eks describe-cluster --name blah --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)
aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4
# no output means no existing provider

Create the OIDC provider:

eksctl utils associate-iam-oidc-provider \
--cluster blah --region us-west-1 --approve

Create the IAM Role that the local k8s ServiceAccount uses:

Creating the Amazon EBS CSI driver IAM role for service accounts - Amazon EKS

Ignore some of the comments in that document: the k8s ServiceAccount and controller aren’t created yet, even though the docs make it sound like they already are.

# --name: the ServiceAccount name in k8s (cannot be changed later)
# --role-name: the name of the IAM Role to create
eksctl create iamserviceaccount \
  --region us-west-1 \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster blah \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name <blah-role>

Create the ebs-csi driver addon

This step creates the k8s ServiceAccount and the ebs-csi pods as well:

eksctl create addon --name aws-ebs-csi-driver \
--region us-west-1 --cluster blah \
--service-account-role-arn arn:aws:iam::<your-aws-account-id>:role/<iam-role-name-from-last-step> --force

Check that the ServiceAccount has an annotation with your IAM role:

$ kubectl get sa ebs-csi-controller-sa -n kube-system -o yaml | grep -A1 annotations
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<aws-account-id>:role/<iam-role-name>

Check that the ebs-csi controller pods are up:

kubectl get pod  -n kube-system | grep ebs-csi
ebs-csi-controller-5cbc775dc5-hr6mz   6/6     Running   0          4m51s
ebs-csi-controller-5cbc775dc5-knqnr   6/6     Running   0          4m51s
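It's also worth confirming the cluster has a default StorageClass before installing JupyterHub, since z2jh's dynamic storage relies on one. A sketch (on EKS the default is usually `gp2`; the annotation key below is the standard Kubernetes one):

```shell
# Look for "(default)" next to one of the classes.
kubectl get storageclass

# If no class is marked as default, you can mark gp2 yourself:
kubectl annotate storageclass gp2 \
  storageclass.kubernetes.io/is-default-class=true
```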

Install JupyterHub:

helm upgrade --cleanup-on-fail \
  --install blah-hub jupyterhub/jupyterhub \
  --namespace jupyterhub \
  --create-namespace \
  --version=2.0.0 \
  --values config.yaml
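If you want to be explicit about the storage rather than relying on the default StorageClass, a minimal config.yaml along these lines should work (a sketch; `gp2` is an assumption based on the usual EKS default):

```yaml
singleuser:
  storage:
    dynamic:
      storageClass: gp2   # pin user PVCs to a specific StorageClass
```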

Does anyone know if the BinderHub helm install supports EKS 1.25 using the gp2 StorageClass? It looks like the hub-db-dir PVC gets created, but the argument seems to be ignored. I have tried [0.2.0-n1011.hb49edf6] and the latest 1.0.0 dev release [] with no luck.