Are there any out-of-the-box reports in JupyterHub (baremetal) that show how it has been consumed by users?
I guess the suggestion on this question is for Docker-based deployments only.
I’m not aware of anything “out of the box”, though it’s theoretically doable.
I was intrigued enough to see how it might be done. If you're using the SystemdSpawner, there's a property to enable accounting of some resources using cgroups.
You can set

```python
c.SystemdSpawner.unit_extra_properties = dict(
    CPUAccounting="yes",
)
```

to enable CPU accounting in the systemd unit for each user. There are other options, including `MemoryAccounting` and `BlockIOAccounting`.
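If you want more than CPU usage, a sketch of a `jupyterhub_config.py` fragment enabling all three of the accounting properties mentioned above might look like this (the combination is my assumption; the property names come from systemd):

```python
# jupyterhub_config.py fragment -- enable several systemd accounting
# options at once for each user's unit (sketch, not tested here)
c.SystemdSpawner.unit_extra_properties = dict(
    CPUAccounting="yes",
    MemoryAccounting="yes",
    BlockIOAccounting="yes",
)
```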
You can then obtain CPU stats under `/sys/fs/cgroup`, e.g. using TLJH:

```
/sys/fs/cgroup/cpuacct/system.slice/jupyter-<username>.service/cpuacct.*
```
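As an illustration, one of those files, `cpuacct.stat`, holds two lines of user/system tick counts. A minimal parser for that format could look like this (the sample values are made up; ticks are in USER_HZ units, typically 10 ms each):

```python
def parse_cpuacct_stat(text):
    """Parse the cgroup v1 cpuacct.stat format, e.g.
    "user 4250\nsystem 1320", into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        key, value = line.split()
        stats[key] = int(value)
    return stats

# Hypothetical contents of a cpuacct.stat file
sample = "user 4250\nsystem 1320"
print(parse_cpuacct_stat(sample))  # -> {'user': 4250, 'system': 1320}
```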
Thanks! That seems cool, but I’m using the default LocalProcessSpawner, so no unit_extra_properties. My needs so far are not that granular, so maybe I can get away with some monitoring/reporting tool for Linux instead. Any pointers would be highly appreciated.
If you’re using a Linux distribution that supports cgroups v2, you might be able to get per-user stats with no additional work. For example, on Fedora 35 every user has a `/sys/fs/cgroup/user.slice/user-<UID>` directory.
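In cgroups v2 those per-user directories expose a `cpu.stat` file with `key value` lines such as `usage_usec <n>`. A sketch of reading a user's total CPU time from it (the exact slice directory name under `user.slice` varies by distribution, so treat the path construction as an assumption):

```python
from pathlib import Path

def parse_cpu_stat(text):
    """Parse a cgroup v2 cpu.stat file (lines like "usage_usec 500")
    into a dict of ints."""
    return {key: int(value)
            for key, value in (line.split() for line in text.splitlines())}

def user_cpu_usage_usec(uid, cgroup_root="/sys/fs/cgroup"):
    """Return total CPU microseconds for a user's slice, or None if the
    file is absent. The user-<UID> path layout is assumed, per the
    Fedora example above."""
    stat_file = Path(cgroup_root) / "user.slice" / f"user-{uid}" / "cpu.stat"
    if not stat_file.exists():
        return None
    return parse_cpu_stat(stat_file.read_text()).get("usage_usec")
```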
There might be other ways without cgroups but I’m not familiar with them.