I have a notebook in which I read some data. The notebook runs on a JupyterHub that I’ve deployed via the Helm chart. The problem is that when I try to fetch a DataFrame using the deltalake library, the kernel dies unexpectedly. The DataFrame is not big, and reading smaller DataFrames works fine, so it looks like a resource problem. Right now the single-user pods run with 1.5 vCPU and 4 GB of memory, and when I monitor pod usage I don’t see high memory or CPU consumption. I’ve been reading on Stack Overflow about configuring memory and buffer size through some kind of jupyterhub_config.py file, but I couldn’t find exactly how to do it.
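The read itself is nothing special; it’s basically something like this (the table path below is just a placeholder for my real storage path):

```python
# Minimal version of the read that kills the kernel (path is a placeholder).
from deltalake import DeltaTable

dt = DeltaTable("s3://my-bucket/path/to/delta-table")  # placeholder URI
df = dt.to_pandas()  # the kernel dies here on the larger table
print(df.shape)
```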
Also, I found this repository, which mentions configuring `MEM_LIMIT` in the spawner:

> `MEM_LIMIT` environment variable. This is set by [JupyterHub](https://github.com/jupyterhub/jupyterhub/) if using a spawner that supports it.
Here’s the [link](https://github.com/jupyter-server/jupyter-resource-usage/blob/bb960b89adabc96d9c79a351063777f2cdfeba7b/README.md) (jupyter-resource-usage README.md, jupyter-server/jupyter-resource-usage on GitHub).
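If I understand that README correctly, the variable should be visible from inside the notebook, so my plan was to sanity-check it with something like this (not sure this is the right way to verify the spawner is passing the limit):

```python
import os

# Check whether the spawner actually set MEM_LIMIT in the single-user pod.
# Per the jupyter-resource-usage README, JupyterHub sets this environment
# variable when the spawner supports it (I assume it's the limit in bytes).
print(os.environ.get("MEM_LIMIT"))  # None would mean it isn't being set
```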
Question: is there any resource limitation that can be configured inside Jupyter itself? It seems that the resources defined under `singleuser` are not being respected:
```yaml
singleuser:
  cpu:
    limit: 1.5
    guarantee: 1
  memory:
    limit: 4G
    guarantee: 4G
```
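For reference, these are the kinds of snippets I’ve seen suggested on Stack Overflow; I haven’t confirmed the option names, nor where they would go in my deployment (hub.extraConfig in the Helm chart? a config file baked into the single-user image?), so please treat them as guesses:

```python
# Guesses pieced together from Stack Overflow answers -- not verified for my setup.
# In a real config file, `c` is provided by JupyterHub/Jupyter via get_config().

# (1) Spawner-level limits, presumably for jupyterhub_config.py; as far as I can
#     tell these should also export MEM_LIMIT / CPU_LIMIT to the single-user server:
c.Spawner.mem_limit = "4G"
c.Spawner.cpu_limit = 1.5

# (2) The "buffer size" setting people mention for large dataframes, presumably for
#     the notebook/server config inside the single-user image (it might be
#     c.ServerApp.max_buffer_size on newer images):
c.NotebookApp.max_buffer_size = 536870912  # 512 MiB, example value only
```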