Detecting CPU and RAM limits on mybinder.org

This is based on a question asked in the chat by Brooks Ambrose (no forum account?):

Is there a way for a program running inside a BinderHub pod to detect directly what limits are imposed on it? I basically want to write the program to automatically adapt its core usage depending on whether it’s being run under resource limits or not.

For those only here for the answer: for the specific case of mybinder.org, checking the value of the CPU_LIMIT environment variable will tell you how many cores you can use. MEM_LIMIT is the corresponding variable for the memory limit (in bytes).
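In Python that boils down to reading two environment variables. A minimal sketch (the fallback to None is just one way of handling the case where the variables are not set, e.g. when running locally; I believe CPU_LIMIT can be a fractional value):

    import os

    # CPU_LIMIT is a (possibly fractional) number of cores, MEM_LIMIT is in bytes.
    # Fall back to None when the variables are not set (e.g. when running locally).
    cpu_limit = float(os.environ["CPU_LIMIT"]) if "CPU_LIMIT" in os.environ else None
    mem_limit = int(os.environ["MEM_LIMIT"]) if "MEM_LIMIT" in os.environ else None

    print("CPU limit (cores):", cpu_limit)
    print("Memory limit (bytes):", mem_limit)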


For some context, this is how we arrived at wanting to be able to do this:

Say a program detects cores to try to take full advantage of parallel processing, which makes sense locally when the whole system is available, but perhaps not when limits are enforced on the remote. There’s a big gap between a 1-core limit and 16 detected cores. So say my program detects the 16 on the node, and within the pod starts a process on each of them.
When the combined load hits the limit, what happens? Do the processes wait in line, or are they all throttled to fit collectively under the limit? Something else entirely? Same question regarding memory allocation.

There does not seem to be a generic way to detect the number of actual “cores” when running inside a docker container or kubernetes pod. You will always be told the number of cores on the host machine.
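You can see this for yourself in a running Binder session (a quick illustration; the numbers you get depend on the node you land on):

    import os

    print(os.cpu_count())                # cores on the host node, e.g. 16
    print(os.environ.get("CPU_LIMIT"))   # the limit actually enforced, e.g. "1.0"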

There is no (additional) penalty for using more CPU than you have been allocated: your processes get throttled and that is that. However, you might use up part of your allocated resources (RAM and CPU) with the overhead of starting each new process or thread. So overall I think it makes sense not to start 16 processes when you only have 1 core available. Your process(es) won’t get killed or evicted for using more than their share of CPU; they will get throttled instead.
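Putting that together, one pattern is to size a worker pool from CPU_LIMIT when it is set and only fall back to the detected core count otherwise. A sketch (not from the original question; `do_work` is a hypothetical stand-in for your own task):

    import math
    import os
    from concurrent.futures import ProcessPoolExecutor

    def n_workers():
        """Respect CPU_LIMIT if it is set, otherwise use the detected core count."""
        limit = float(os.environ.get("CPU_LIMIT", 0))
        if limit > 0:
            # CPU_LIMIT can be fractional (e.g. "0.5"); never go below one worker
            return max(1, math.floor(limit))
        return os.cpu_count() or 1

    def do_work(x):
        # hypothetical stand-in for your real task
        return x * x

    if __name__ == "__main__":
        with ProcessPoolExecutor(max_workers=n_workers()) as pool:
            results = list(pool.map(do_work, range(100)))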

A BinderHub tells you the CPU limit enforced on you via the CPU_LIMIT environment variable.

There does not seem to be a generic way to detect the memory limit enforced on you inside a docker container or kubernetes pod. Once the total memory used by all processes in your pod exceeds the memory limit, they all become eligible for being “OOM killed” (OOM = out of memory).

I am unsure whether the result of being OOM killed is that your whole pod gets removed (this is what the docs make me think) or whether processes get killed individually, which might lead to your pod being killed as a side effect (I think this is the case, given how using too much memory manifests for BinderHub users). Often what happens is that you allocate too much memory in your kernel, which then gets killed. The notebook server itself (and the pod) continues to run, though. Users experience this as “kernel died” when they run the cell in the notebook that takes them past the limit.
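If you want to stay clear of that, one option is to check how much head-room is left before a big allocation by comparing current usage against MEM_LIMIT. A rough sketch, assuming psutil is installed (it only counts the current process, not everything in the pod, and the 10% margin is an arbitrary choice, not anything BinderHub prescribes):

    import os
    import psutil

    def memory_headroom():
        """Bytes left before this process reaches the pod's memory limit (None if unset)."""
        limit = int(os.environ.get("MEM_LIMIT", 0))
        if not limit:
            return None
        used = psutil.Process().memory_info().rss  # resident memory of this process
        return limit - used

    headroom = memory_headroom()
    if headroom is not None and headroom < 0.1 * int(os.environ["MEM_LIMIT"]):
        print("Close to the memory limit -- consider working in smaller chunks")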

There are a lot more details in the kubernetes documentation on memory limits and in this tech deep dive.

A BinderHub tells you the memory limit enforced on you via the MEM_LIMIT environment variable.

There are notebook extensions which will show you how much RAM and CPU you are using, as well as how close you are to the limit (for example nbresuse, mentioned further down).

There are other forum threads which touch on this topic as well.

If you know more about this topic or spot any errors please let me know or add a message to this thread.


“There does not seem to be a generic way to detect the memory limit enforced on you inside a docker container or kubernetes pod.”

Hi! I’m using the following code snippets for CPU, MEM, and GPU limits and usage in a k8s environment.
As for CPU and MEM, the hack is to get the limits from the in-container cgroup configuration.

    # Methods of a helper class; they rely on
    # `import math`, `import psutil` and `from pathlib import Path`.
    def get_curr_cpu_usage(self):
        def get_cpu_percent(p):
            try:
                return p.cpu_percent(interval=0.1)
            except Exception:
                # processes can disappear or deny access while we iterate
                return 0

        return sum([get_cpu_percent(p) for p in psutil.process_iter()])

    def get_cpu_quota_within_docker(self):
        cpu_cores = None
        # cgroup v1 CFS settings; a quota of -1 means "no limit"
        cfs_period = Path("/sys/fs/cgroup/cpu/cpu.cfs_period_us")
        cfs_quota = Path("/sys/fs/cgroup/cpu/cpu.cfs_quota_us")

        if cfs_period.exists() and cfs_quota.exists():
            with cfs_period.open('rb') as p, cfs_quota.open('rb') as q:
                p, q = int(p.read()), int(q.read())
                # get the cores allocated by dividing the quota
                # in microseconds by the period in microseconds
                cpu_cores = math.ceil(q / p) if q > 0 and p > 0 else None

        # express the limit in "percent" so it is comparable to the
        # sum of psutil cpu_percent() values above
        return cpu_cores * 100.0 if cpu_cores is not None else 0

    def get_curr_memory_usage(self):
        # `sum` is fine since the kernel is spawned in the isolated env
        # by enterprise-gateway.
        return sum([p.memory_info().rss for p in psutil.process_iter()])

    def get_memory_quota_within_docker(self):
        mem_bytes = 0
        # cgroup v1 memory limit for this container, in bytes
        mem_limit_path = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")

        if mem_limit_path.exists():
            with mem_limit_path.open('rb') as mem_limit_file:
                mem_bytes = int(mem_limit_file.read())

        return mem_bytes
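
One caveat about the snippets above: they read the cgroup v1 paths. On hosts using the unified cgroup v2 hierarchy those files do not exist; the equivalents are /sys/fs/cgroup/cpu.max and /sys/fs/cgroup/memory.max. A sketch of the v2 variant (my addition, not part of the original snippets; returns None when no limit is set or the files are missing):

    from pathlib import Path

    def cpu_limit_cgroup_v2():
        """CPU limit (in cores) from cgroup v2's cpu.max ("$QUOTA $PERIOD" or "max $PERIOD")."""
        path = Path("/sys/fs/cgroup/cpu.max")
        if not path.exists():
            return None
        quota, period = path.read_text().split()
        return None if quota == "max" else int(quota) / int(period)

    def mem_limit_cgroup_v2():
        """Memory limit (in bytes) from cgroup v2's memory.max ("max" or a byte count)."""
        path = Path("/sys/fs/cgroup/memory.max")
        if not path.exists():
            return None
        value = path.read_text().strip()
        return None if value == "max" else int(value)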

As for GPU, I’m using the nvidia-ml-py3 library (imported as nvidia_smi).

    def get_gpu_info(self):
        # relies on `import nvidia_smi` from the nvidia-ml-py3 package
        try:
            nvidia_smi.nvmlInit()
            device_count = nvidia_smi.nvmlDeviceGetCount()

            def _create_gpu_info(idx):
                handle = nvidia_smi.nvmlDeviceGetHandleByIndex(idx)
                util = nvidia_smi.nvmlDeviceGetUtilizationRates(handle)
                mem = nvidia_smi.nvmlDeviceGetMemoryInfo(handle)

                return dict(gpu=idx,
                            name=nvidia_smi.nvmlDeviceGetName(handle).decode(),
                            util=util.gpu,
                            mem=dict(used=mem.used, total=mem.total))

            return [_create_gpu_info(idx) for idx in range(device_count)]
        except Exception:
            # no GPU present, or the NVML library is unavailable
            return []

Using those snippets, I’m working on the lab extension shown below. (Hope to make it complete soon…)
[screenshot of the work-in-progress lab extension]


Nice work!

Would it be possible to add the CPU/mem information based on cgroups to https://github.com/yuvipanda/nbresuse, which is becoming a commonly used backend for extensions showing resource usage?


That’s a great idea. Let me send a new PR to that repo :slight_smile:


Sorry for the noise, but I just want to say that I :heart: this community. This post just helped me allocate a hub for a classroom without struggle, since I could both test the memory available in a normal Binder and check that running typical class notebooks is totally not going to explode the memory limit :grinning_face_with_smiling_eyes:
