How do I enable autosave whenever I run a script, whether modified or unmodified?

I make a change to an existing .ipynb, run it with
Kernel → Restart Kernel and Run All Cells, and then deliberately quit without saving. Why is it not
saved automatically, given that I ran it at least once? Any tips on autosave?

I would think that, by default, no one expects running a notebook to clobber the previously saved version. If I re-ran an existing notebook and realized this had wiped out work I had previously done and needed, I'd be thankful it didn't save automatically at the conclusion of 'Run All', because I could quit and still have the last saved version intact. Others like a clean notebook, as I do: after it is run to make sure it works, the output can be cleared and the notebook explicitly saved.

At any rate, you can set the autosave interval as low as every second; see here and here. Maybe that would get you closer to what you need?
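In current JupyterLab, for instance, the interval is exposed in the Document Manager settings (Settings → Advanced Settings Editor); a user override along these lines should bring it down to one second, though the exact settings schema may vary with your version:

{
    "autosave": true,
    "autosaveInterval": 1
}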
However, to get better help, you may want to describe your current use case and what you want to achieve. For example, if you are trying to run multiple notebooks and keep the run versions, then you'd be better off controlling this from the command line, or from inside another notebook, using nbconvert, jupytext, or papermill to execute the notebooks and save them with the output included.
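For instance, both of these run a notebook headlessly and keep the executed version (file names here are placeholders):

# nbconvert: without --inplace the result goes to mynotebook.nbconvert.ipynb;
# with it, the original file is overwritten by the run version.
jupyter nbconvert --to notebook --execute --inplace mynotebook.ipynb

# papermill: writes the executed copy to a separate file,
# and also lets you parameterize each run.
papermill mynotebook.ipynb mynotebook-run.ipynb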

Plus, there are checkpoints to consider.


One link says Jupyter by default autosaves every 2 minutes.

Hence it seems that if I pull up an .ipynb in a browser, change it, run it immediately, and am satisfied with the change, I should
consciously save it (Ctrl-S) before I quit. Anyway, I am a vi user and have never used autosave (I am not sure vi even has such an option); things are much cleaner there. If I try to quit vi without saving, the editor won't let me go unless I tell it exactly what to do. I guess a totally different strategy needs to be adopted for .ipynb files. For now, I will assume the .ipynb
gets changed on disk if I make a change and stay long enough, and I should just diligently save a modified .ipynb when I am about to quit, so that I don't get a surprise down the road when I reuse the script (some surprises can be painful, as we might have to repeat a long simulation). I do not want to second-guess whether I have spent more than 2 minutes on the modified notebook and autosave has kicked in, and consistently maintaining a defensive 1-second autosave on every computer I use would be excessive, too.

Maybe you are using Jupyter differently on your system, or your browser settings interfere. If I try to navigate away after a change, it does alert me so that I can save, similar to vi prompting you.

There are keyboard shortcuts that let you save after a significant step; it is probably easier to get into that habit than to set something up. Or, since as a vi user you are used to the terminal, you may just want to explore running notebooks from the terminal using nbconvert or jupytext, and reserve the graphical user interface for development within the notebooks. (In fact, by incorporating jupytext in your toolchain you can use vi, or any other text editor, to edit text files that you convert into your notebooks; see the sketch below.) Another popular option is to use VS Code to develop and edit notebooks; that may add some of the abilities you seek.
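That round trip looks roughly like this (file names are illustrative):

jupytext --to py:light mynotebook.ipynb   # writes mynotebook.py
vi mynotebook.py                          # edit it as ordinary text
jupytext --to notebook mynotebook.py      # regenerates mynotebook.ipynb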

Thank you. I think I will slowly get used to the nitty-gritty of Jupyter notebooks. I have used nbconvert to convert an .ipynb to a .py file so that I can run the .py on a remote computer without a GUI.
I am not sure in what situations jupytext is the better choice.

In case you didn't know, you can also execute a notebook on a remote computer using nbconvert, skipping the conversion to a Python script; see here.
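Something like this should work, assuming Jupyter is installed on the remote machine (user@remote and the notebook name are placeholders):

ssh user@remote 'jupyter nbconvert --to notebook --execute mynotebook.ipynb'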

For most of what you've discussed so far, jupytext is largely parallel to nbconvert in use. I find jupytext adds some extra convenience when I want to generate notebooks from text, because it also allows the option of using Markdown; see here. It adds a number of additional benefits and some automation, too. One example of that automation is keeping paired notebooks and their alternate representations in sync; see here.
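The pairing workflow, for example, looks roughly like this (mynotebook.ipynb is a placeholder):

jupytext --set-formats ipynb,md mynotebook.ipynb   # pair the notebook with a .md file
jupytext --sync mynotebook.ipynb                   # re-run after editing either file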


I tried out

jupyter nbconvert --to notebook --execute mynotebook.ipynb

(with my own mynotebook.ipynb, of course). I did this inside a screen session, but the outcome was a disaster. I could not detach the screen with Ctrl-a d (I enter screen by typing "screen" on the command line), so I could not log out from the remote computer while letting the script do its job inside screen. I ended up having to kill many MPI jobs, since my .ipynb sequentially spawns relatively short MPI jobs.
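For reference, what I was attempting was essentially the standard detach-friendly pattern, sketched here with a named session (nbrun is an arbitrary name; I had used a bare "screen"):

screen -S nbrun                                              # start a named session
jupyter nbconvert --to notebook --execute mynotebook.ipynb
# Ctrl-a d to detach and log out; reattach later with:
screen -r nbrun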

jupytext --to markdown notebook.ipynb # convert notebook.ipynb to a .md file

So the idea is to use any editor that can handle .md files (admittedly I only know a bit about Markdown) to make changes to the "original" code in the .md representation, and then use the updated .md to output a new notebook.ipynb?

You may want to make a new post about this after you try a few more things. I've worked with screen in other contexts, but not with jupyter nbconvert. I would suggest testing with a simplistic notebook that doesn't involve spawning other jobs. You could make it take a few minutes to finish by using time.sleep(300) as a step before one last real step (see the sketch below). Who knows, maybe the Jupyter side of that has something that monitors for special key sequences?
Then, if that simple notebook still causes a failure of screen where you are working, can you try jupytext with the same simple notebook and see whether it has the same issue?
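For example, the whole test notebook could be a single cell along these lines:

import time

# Stand-in for real work: long enough to test detaching from screen mid-run.
time.sleep(300)   # five minutes
print("finished cleanly")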

That is one way to use it. For a few uses, I've taken the Markdown text (written directly or produced by a conversion like the one you show) as a template, then used Python's replace to fill in the details for a specific case and produce an actual notebook. Then I do the same to produce a similar notebook with different details. That way I can produce a series of similar notebooks whose internal details differ; maybe I am making a different notebook for each set of data, where the data structure is similar but the details differ.
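A minimal sketch of that idea, assuming a template.md whose text contains a made-up {{DATASET}} placeholder:

from pathlib import Path
import subprocess

template = Path("template.md").read_text()
for dataset in ["set_a", "set_b", "set_c"]:
    md_file = Path(f"analysis_{dataset}.md")
    md_file.write_text(template.replace("{{DATASET}}", dataset))
    # jupytext turns each filled-in Markdown file into a real notebook
    subprocess.run(["jupytext", "--to", "notebook", str(md_file)], check=True)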

I think some people like using the Markdown form as a way to track changes in git more easily; see this discussion thread for that and other reasons for the text-based form. Traditionally, places like GitHub and GitLab didn't do a good job of producing clear diff representations of notebooks, because notebooks are really JSON underneath, with a lot of metadata that changes with every new version. (This is being improved; I think GitLab announced some special handling so that when you browse notebooks the diff representation looks nice and clear.) So using the Markdown form as the main form of the notebook was a way to track changes more easily.

Markdown at its core is just text. It's designed to be read easily in a text editor as plain text, but you can also invoke other rendering engines on it. Most of the README files in my GitHub repos are Markdown. You'll notice that by default GitHub renders them with fancy formatting. See an example here; you can click the Raw button and see the raw text. It's still fairly human-readable, but it doesn't look as fancy on the page.

$ /opt/anaconda3/bin/jupyter nbconvert --to notebook --execute ipy10-LattPara2.ipynb
[NbConvertApp] Converting notebook ipy10-LattPara2.ipynb to notebook
[NbConvertApp] Executing notebook with kernel: python3
[NbConvertApp] Writing 150591 bytes to ipy10-LattPara2.nbconvert.ipynb

I see, so it executes one .ipynb and saves the results in another .ipynb.
