Server specifications required for about 50 people to use JupyterHub

I am an engineer currently developing a service that uses JupyterHub.

I want to build an environment where about 50 people can log in at the same time and use JupyterHub comfortably.

When I ran JupyterHub on an AWS EC2 c5.large instance, it stopped responding once around 20 people were using it.

What do you think is the cause?

Could you please tell me how to estimate the memory and CPU capacity required to run JupyterHub stably?

According to the following article, and given that a c5.large has only 4GB of memory, I suspect the server is running out of memory.
https://tljh.jupyter.org/en/latest/howto/admin/resource-estimation.html#howto-admin-resource-estimation
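To make that expectation concrete, here is the kind of back-of-envelope calculation I have in mind, along the lines of that guide (memory ≈ concurrent users × per-user memory, plus some overhead for the hub itself). The per-user figures and the overhead below are assumptions for illustration, not measurements:

```python
# Back-of-envelope sizing in the spirit of the TLJH resource-estimation guide.
# All per-user figures below are assumptions for illustration, not measurements.

concurrent_users = 50      # users I want to support at the same time
mem_per_user_mb = 128      # per-user memory limit I intend to set
hub_overhead_mb = 1024     # rough allowance for the hub, proxy and OS (assumed)

required_mem_mb = concurrent_users * mem_per_user_mb + hub_overhead_mb
print(f"estimated memory: {required_mem_mb / 1024:.2f} GB")  # -> 7.25 GB

cpu_per_user = 0.1         # assumed average CPU cores used per active user
required_cores = concurrent_users * cpu_per_user
print(f"estimated CPU:    {required_cores:.1f} cores")       # -> 5.0 cores
```

Even on this rough estimate, 50 simultaneous users at 128MB each already exceed the 4GB available on a c5.large, which would match what I am seeing.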

In addition, I ran it with memory limited to 128MB per user.

It depends on what your users are doing. For example, a user running computationally intensive simulations or machine learning will need far more memory and CPU than someone writing very basic scripts. You can either monitor the CPU and memory use of one or two users and work out what you need in order to scale it up, or monitor the total CPU/memory usage of your server and, if you hit the limit, switch to a bigger server.
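As a rough sketch of the first approach, you could periodically take a per-user snapshot like the one below. It assumes psutil is installed and that the single-user servers run as local processes whose command line contains "jupyterhub-singleuser" (typical for local or systemd-based spawners), with kernels running as their child processes; adjust the matching to your own setup:

```python
# Per-user memory snapshot for locally spawned single-user servers.
# Assumes each server's command line contains "jupyterhub-singleuser"
# and that its notebook kernels run as child processes.
import psutil
from collections import defaultdict

usage_mb = defaultdict(float)

for proc in psutil.process_iter(["username", "cmdline"]):
    try:
        cmdline = " ".join(proc.info["cmdline"] or [])
        if "jupyterhub-singleuser" not in cmdline:
            continue
        # Count the server process plus its children (the kernels).
        for p in [proc] + proc.children(recursive=True):
            usage_mb[proc.info["username"]] += p.memory_info().rss / 1e6
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        continue

for user, mb in sorted(usage_mb.items(), key=lambda item: -item[1]):
    print(f"{user}: {mb:.0f} MB")

vm = psutil.virtual_memory()
print(f"server total: {vm.used / 1e6:.0f} / {vm.total / 1e6:.0f} MB used")
```

Running something like this while one or two representative users work gives you a per-user figure you can multiply up to 50 users.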

Thank you for your reply.

I see, so it depends on how much each user's workload demands.

By the way, when I reviewed the settings again, it turned out that no limits had actually been applied in the first place.

With the limits properly applied, it now runs reasonably comfortably.
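For reference, the limits I ended up applying look roughly like the sketch below, assuming a plain jupyterhub_config.py. Note that mem_limit and cpu_limit are standard Spawner options but are only enforced by spawners that support them (e.g. systemd-, Docker- or Kubernetes-based spawners); on a TLJH install the same limits are set through tljh-config instead:

```python
# jupyterhub_config.py (sketch) -- per-user resource limits.
# mem_limit / cpu_limit are standard Spawner options, but they are only
# enforced by spawners that support them. The values below are the ones
# I am experimenting with, not recommendations.
c = get_config()  # noqa: F821  (injected by JupyterHub when loading the file)

c.Spawner.mem_limit = "128M"  # hard memory cap per single-user server
c.Spawner.cpu_limit = 0.5     # CPU cores per user (assumed value)
```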

Also, is it correct to understand that JupyterHub stops working when the server runs out of memory?