I’m currently encountering an issue where a custom language kernel (https://github.com/microsoft/iqsharp) keeps getting restarted when run on mybinder (repro at Binder), but I can’t reproduce the issue locally when running Docker on the same image. Is it possible to get to the debug logs left by Jupyter Notebook, so that I can tell why the Notebook front end thinks our kernel has died? Thank you for your help!
The short answer is: your kernel has died. The more important question is why. Some investigation suggests that it is getting killed for allocating more than the allowed 2GB of RAM.
However, to answer the question about server logs:
As long as you don’t use a custom Dockerfile, Binder uses an entrypoint to tee the logs to a local file (`.jupyter-server-log.txt` in the repo root). However, it looks like you are using a custom Dockerfile, which means this functionality is not provided and you would have to reimplement it yourself. The repo2docker implementation is here, and it is used as `ENTRYPOINT /usr/local/bin/repo2docker-entrypoint` in the Dockerfile, if you want to use that for reference.
btw, this simpler entrypoint should work most of the time:

```sh
#!/bin/sh
export PYTHONUNBUFFERED=1
exec "$@" 2>&1 | tee .jupyter-server-log.txt
```
We had to implement our own to guarantee certain flush characteristics that come up when running repo2docker locally, but those shouldn’t (I think) come up on Binder.
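If you do go the custom-Dockerfile route, wiring a script like that into your image might look like the following sketch (the filename `entrypoint.sh` and install path are just examples, not something Binder requires):

```dockerfile
# Hypothetical example: install the simple entrypoint script from above
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Use the exec form so the image's CMD is passed through as "$@"
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
```

The exec-form `ENTRYPOINT` matters here: with the shell form, the command would not be forwarded to the script as arguments, so `exec "$@"` would have nothing to run.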
I ran your Binder and did encounter the same issues. There wasn’t useful info in the server logs, though; the process was just killed. To get better debugging, I ran the kernel directly in one terminal session (copy/pasting the kernel command) and connected a client from another terminal. Ultimately, though, the process appears to be getting killed by the node’s OOM killer, so it may just be a memory-limit issue: when I start an iqsharp kernel, it immediately allocates 1.6 GB of RAM, which is close to the 2 GB limit on mybinder.org, before running any code.
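Since the OOM killer is the likely culprit, it can help to watch the kernel process’s resident memory from inside the container. This is just a sketch (not from the thread) that reads `/proc` directly, so it assumes a Linux environment like the mybinder.org nodes; the ~2048 MiB figure is the limit mentioned above:

```python
def rss_mib(pid="self"):
    """Return the resident set size of a process in MiB (Linux only).

    Reads /proc/<pid>/status, where VmRSS is reported in kB.
    """
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / 1024.0
    return None

# Check how close the current process is to the ~2048 MiB Binder limit
print(f"RSS: {rss_mib():.0f} MiB (mybinder.org allows roughly 2048 MiB)")
```

Running this inside the kernel (or against the kernel’s PID) right after startup would show whether it really starts out near the limit.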
@minrk: Thank you for the help, that makes a lot of sense! I wasn’t aware of the entrypoint for capturing logging info, but that will definitely help a lot going forward. Thank you so much!