Memory Allocation Error

Hi,

Some of my users are experiencing memory allocation errors when the Free Memory runs low, even though the Available Memory still has plenty of spare capacity. Is there a way to configure things so that the Available Memory is used when spinning up new instances while the Free Memory is low?


I believe schedulers only take the memory reservations/guarantees into account, not memory usage or limits, so unfortunately not to my knowledge. The only way to achieve this is to have a smaller difference between memory limit and guarantee so the guarantees more accurately reflect usage. The most conservative and stable (and expensive) choice is to use the same number for both.
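For illustration, assuming a Kubernetes-based deployment with KubeSpawner (the same `mem_limit` / `mem_guarantee` traits exist on the base Spawner), a minimal sketch of that "same number for both" approach in `jupyterhub_config.py` would look like this; the `4G` value is just a placeholder:

```python
# jupyterhub_config.py -- minimal sketch, values are placeholders

# Hard cap: the user's notebook server is killed/evicted if it exceeds this.
c.KubeSpawner.mem_limit = "4G"

# Reservation the scheduler uses when placing the pod. Setting it equal to
# the limit is the conservative (and most expensive) option described above,
# since scheduling decisions then reflect worst-case usage.
c.KubeSpawner.mem_guarantee = "4G"
```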

Thanks for the feedback.
These are not scheduled jobs; rather, the data science users run their daily Python scripts to query large data sets for their modelling purposes. Is there any parameter tuning in the .py file that might help with these memory allocation problems?

What sort of JupyterHub deployment is this? What Spawner are you using? Generally, user sessions can have memory limits and guarantees to limit how many resources each user can consume.

Yes, I’m sure there are, but I can’t help with what, specifically. Anyone would need a lot more specific information about what’s being run in order to do that, and I’m not an expert, so probably can’t help myself.
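One generic pattern that often helps with "query a large data set" workloads, though whether it applies here depends entirely on the actual code, is processing results in chunks instead of loading everything at once. A rough sketch with pandas (the file name and column are hypothetical):

```python
import pandas as pd

# Hypothetical extract of the large data set; substitute the real source.
total = 0.0
for chunk in pd.read_csv("large_dataset.csv", chunksize=100_000):
    # Work on 100k rows at a time so the whole data set is never
    # materialised in memory at once.
    total += chunk["value"].sum()

print(total)
```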

It is the context in which these jobs are run, e.g. the Spawner in JupyterHub, that sets limits on how many resources any given process can use. If a user exceeds these limits, their job or process will be killed instead of taking down other things on the system. Asking for too many resources is generally going to result in failure of some kind, but you can make it more likely that it is the process requesting the memory that fails, rather than more critical or shared resources.
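If it helps with debugging, users can also check their own footprint from inside a notebook and compare it against the configured limit. A small sketch, assuming `psutil` is available in the user image (it usually is in the standard Jupyter docker stacks):

```python
import os
import psutil

# Resident memory of the current kernel process, in bytes.
rss = psutil.Process(os.getpid()).memory_info().rss
print(f"Kernel is using {rss / 1024**3:.2f} GiB")

# JupyterHub spawners typically expose the configured limit (in bytes) via
# the MEM_LIMIT environment variable when mem_limit is set.
limit = os.environ.get("MEM_LIMIT")
if limit:
    print(f"Configured limit: {int(limit) / 1024**3:.2f} GiB")
```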