Using Azure Disks as Persistent Volumes

We are currently evaluating the autoscaling functionality of JupyterHub, which is deployed on our Azure Kubernetes Service (AKS) infrastructure using the Helm chart from the JupyterHub Helm chart repository (https://hub.jupyter.org/helm-chart/).
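For reference, this is roughly how we deploy and upgrade the release (the release name, namespace, and config.yaml values file below are placeholders for our actual setup):

```bash
# Add the JupyterHub Helm chart repository and install/upgrade the release.
# "jhub" (release and namespace) and config.yaml are placeholders.
helm repo add jupyterhub https://hub.jupyter.org/helm-chart/
helm repo update

helm upgrade --cleanup-on-fail \
  --install jhub jupyterhub/jupyterhub \
  --namespace jhub \
  --create-namespace \
  --values config.yaml
```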

We use Standard SSD disks (via the default StorageClass) as dynamically provisioned storage for the single-user pods. One challenge we have run into: say there are 3 nodes in the current node pool, with user pods distributed across them and their respective SSD-backed PVs attached. After a period of inactivity the pods are culled and the cluster automatically scales down to its minimum, say 1 node. When the users come back online, their persistent volumes, which were created earlier and are tied to one of the scaled-down nodes, can no longer be attached, eventually resulting in spawn failures.
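Our working theory (an assumption on our part, not something we have confirmed) is that each disk-backed PV carries node affinity, for example to an availability zone, that the remaining or newly created nodes no longer satisfy. A sketch of how we have been inspecting this; the PVC and PV names are illustrative:

```bash
# List the user PVCs and their bound PVs (names below are illustrative).
kubectl get pvc --namespace jhub
kubectl get pv

# Show where a given PV is allowed to attach, e.g. a
# topology.kubernetes.io/zone node-affinity term recorded at provisioning time.
kubectl get pv pvc-0123abcd -o jsonpath='{.spec.nodeAffinity.required}'
```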

We thought of moving to Azure Files (file shares) as an alternative, but would like to hear from you whether this is a known drawback of using Azure Disks, or whether there is something we have missed in our configuration. Please note we have used the default StorageClass, which has "volumeBindingMode: WaitForFirstConsumer".
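If we do move, the sketch below is the kind of Azure Files StorageClass we would try with the CSI driver on AKS. It is untested on our side: the class name is a placeholder, the SKU is our choice, and the uid=1000/gid=100 mount options are intended to match the jovyan user in the default single-user image:

```bash
# Hypothetical Azure Files StorageClass for user home directories.
# "azurefile-user-storage" is a placeholder name; SKU and mount options are our guesses.
kubectl apply -f - <<EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-user-storage
provisioner: file.csi.azure.com
reclaimPolicy: Retain
volumeBindingMode: Immediate
mountOptions:
  - uid=1000        # file owner matches the jovyan user in the default image
  - gid=100
  - mfsymlinks      # emulate symlinks, which SMB does not support natively
  - nobrl           # avoid byte-range locking issues (e.g. with sqlite files)
  - actimeo=30
parameters:
  skuName: Standard_LRS
EOF
```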

Also, we would like to understand: between Azure Files and Azure Disks, which would be the Jupyter-recommended option for persisting user storage?
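If Azure Files is the recommended route, we assume the switch on the chart side is just pointing dynamic user storage at the new class in our values file, along these lines (the class name matches the placeholder above):

```bash
# config.yaml fragment: use the Azure Files class for dynamically
# provisioned user storage ("azurefile-user-storage" is our placeholder).
cat >> config.yaml <<EOF
singleuser:
  storage:
    dynamic:
      storageClass: azurefile-user-storage
EOF
```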