I have set the size of the PersistentVolumeClaim for users’ notebooks to 80Gi, but users can still write to their volumes after this size is exceeded, so the capacity has no actual effect; it’s just a label.
I’m pretty sure this is something related to Kubernetes itself rather than Jupyter, but is there any recommendation from Jupyter’s side about this?
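For context, assuming a standard Zero to JupyterHub Helm deployment, the setting in question would look something like this excerpt from my config (a sketch, not a complete values file):

```yaml
# Excerpt from the values passed to the Zero to JupyterHub Helm chart.
# singleuser.storage.capacity sets the size requested in each user's PVC.
singleuser:
  storage:
    type: dynamic
    capacity: 80Gi
```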
You’re correct that it’s to do with Kubernetes, not Jupyter. Most public cloud providers enforce fixed-size volume claims (though they may have a minimum size) on their managed Kubernetes services; check the docs for your cloud provider.
If this is a self-managed K8s cluster, there are lots of storage controllers: some enforce storage quotas, others ignore the field entirely, as illustrated in the sketch below.
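To make that concrete: the configured capacity ends up in the PVC’s `spec.resources.requests.storage` field, and whether that number becomes a hard limit is entirely up to the provisioner behind the named StorageClass, not Kubernetes itself. A minimal PVC might look like this (the names here are placeholders):

```yaml
# Minimal PVC sketch; "standard" is a placeholder StorageClass name.
# Whether the 80Gi request is enforced as a hard limit depends on the
# provisioner backing the StorageClass, not on Kubernetes itself.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim-username   # placeholder; z2jh generates per-user names
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 80Gi
```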
The best option really depends on your trade-offs between cost, resilience, time spent managing storage, etc. I can’t think of any easy-to-deploy storage provisioner that doesn’t require external storage.
Kubernetes provides an abstraction layer that hides the implementation details of things like servers/compute, networking, and storage. Public cloud providers deal with most of this, but if you’re running an on-prem K8s cluster you’re responsible for these implementation details, since they depend on your hardware.
E.g. there are many types of on-prem network storage; the storage controller sits between your physical storage and your requests for Kubernetes volumes.
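A StorageClass is how you point volume requests at a particular controller. As one example, a class backed by the NFS CSI driver might look like the sketch below (the driver name is real, but the server and share values are placeholders for your own storage). Note that NFS-backed volumes are a common case where the capacity field is *not* enforced:

```yaml
# Sketch of a StorageClass for an on-prem NFS backend.
# nfs.csi.k8s.io is the csi-driver-nfs provisioner; the server and
# share parameters below are placeholders for your own storage.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs.example.internal   # placeholder NFS server
  share: /exports/jupyterhub     # placeholder export path
reclaimPolicy: Retain
```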