Automatically running jupyter notebooks and persisting memory


Every Sunday, after my workstation is rebooted, I have to manually switch between multiple Jupyter notebooks and run them so that the work stays in memory for the week. I would like to automate this process. Everything I've read so far (papermill, nbconvert via its API or the command line plus cron, Jupyter Scheduler) seems to do the same thing in the backend: locally run a notebook and copy its output to some new file (or replace the original file).

I start my Jupyter server via cron. I would like some way to specify a set of notebooks and do the equivalent of "restart kernel and run all", exiting at the first error.

How can I do this?
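For the "run each notebook with a fresh kernel, stop at the first error" part on its own, a minimal sketch with the nbconvert CLI (driven from cron) could look like the following. The notebook paths are placeholders; nbconvert already aborts at the first failing cell unless you pass `--allow-errors`.

```python
# Sketch: execute each notebook in-place with a fresh kernel, exiting at
# the first error. Notebook paths are hypothetical placeholders.
import subprocess
import sys

NOTEBOOKS = ["analysis.ipynb", "reports.ipynb"]  # placeholder paths

def nbconvert_cmd(path):
    # Equivalent of "restart kernel and run all": a new kernel executes
    # every cell; without --allow-errors, the first failing cell aborts.
    return [
        "jupyter", "nbconvert",
        "--to", "notebook",
        "--execute", "--inplace",
        path,
    ]

def run_all(notebooks):
    for nb in notebooks:
        result = subprocess.run(nbconvert_cmd(nb))
        if result.returncode != 0:
            sys.exit(result.returncode)  # stop at the first failing notebook

# run_all(NOTEBOOKS)  # call this from the cron-launched script
```

Note this runs each notebook in a throwaway kernel, so it does not by itself keep the kernel (and its in-memory state) around afterwards, which is the harder part of the question.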

You can use the API to start the server: JupyterHub REST API — JupyterHub documentation (or you can hit /hub/spawn directly)
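As a sketch of that REST call, using only the standard library: `POST /hub/api/users/{name}/server` asks the Hub to spawn that user's server. The Hub URL, username, and token below are placeholders for your deployment.

```python
# Sketch: ask JupyterHub to spawn a user's single-user server via its REST API.
# HUB_URL, USERNAME, and TOKEN are placeholders.
import urllib.request

HUB_URL = "http://127.0.0.1:8000"    # hypothetical Hub address
USERNAME = "me"                      # hypothetical user
TOKEN = "replace-with-an-api-token"  # e.g. from the Hub's token page

def spawn_request(hub_url, user, token):
    # POST /hub/api/users/{name}/server starts the server if it isn't running
    return urllib.request.Request(
        f"{hub_url}/hub/api/users/{user}/server",
        method="POST",
        headers={"Authorization": f"token {token}"},
    )

# To actually send it:
# urllib.request.urlopen(spawn_request(HUB_URL, USERNAME, TOKEN))
```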

But I don’t know how to run a notebook so that you can access it later. It probably depends on your spawner.


Perhaps the simplest approach (though it may not be that simple!) would be to use something like Playwright to script a headless browser: actually open the document and click "Restart Kernel and Run All Cells" in JupyterLab.
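A rough sketch of that idea with Playwright's sync API might look like the following. The menu labels and the confirmation-dialog button text are assumptions about the JupyterLab UI and may need adjusting for your version, and authentication/token handling is omitted entirely.

```python
# Sketch: drive a headless browser to open a notebook in JupyterLab and click
# "Restart Kernel and Run All Cells...". UI labels are assumptions; auth omitted.

def restart_and_run_all(lab_url, notebook_path):
    # Imported inside the function so the sketch is readable without
    # Playwright installed (pip install playwright; playwright install).
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # Open the notebook directly in the Lab interface
        page.goto(f"{lab_url}/lab/tree/{notebook_path}")
        page.click("text=Kernel")  # top menu bar
        page.click("text=Restart Kernel and Run All Cells")
        page.click("text=Restart")  # confirmation dialog; label is an assumption
        browser.close()
```

Because the notebook runs inside the server's own kernel session, the kernel (and its state) stays alive afterwards, which is what the original question wants.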

nbconvert --execute --inplace would almost work, if you could tell it to use an existing kernel (connecting to an existing kernel works at the library level, but is not exposed as an option). A custom ExecutePreprocessor could, I believe, override setup_kernel to connect to an existing kernel instead of starting a new one. The steps would then be: use the REST API to launch a 'session' that associates a kernel with the notebook and returns the kernel id, then run nbconvert --execute --inplace --kernel-id a-b-c-d with your custom execute preprocessor.
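The first of those steps can be sketched against the Jupyter Server REST API with only the standard library: `POST /api/sessions` starts a kernel and associates it with the notebook, and the JSON response carries the kernel id. The server URL and token are placeholders; the `--kernel-id` flag mentioned above is hypothetical and would come from your custom preprocessor.

```python
# Sketch of step 1: create a Jupyter 'session' so a kernel is started and
# tied to the notebook; the response's ["kernel"]["id"] is the kernel id.
# SERVER_URL and TOKEN are placeholders.
import json
import urllib.request

SERVER_URL = "http://127.0.0.1:8888"  # hypothetical single-user server
TOKEN = "replace-with-a-token"

def session_request(server_url, token, notebook_path):
    body = json.dumps({
        "path": notebook_path,
        "type": "notebook",
        "kernel": {"name": "python3"},
    }).encode()
    return urllib.request.Request(
        f"{server_url}/api/sessions",
        data=body,
        method="POST",
        headers={
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
        },
    )

# After urlopen(...), parse the JSON response and read ["kernel"]["id"],
# then hand that id to the custom execute preprocessor described above.
```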