How to Version Control Jupyter Notebooks

Hello all

I thought a few of you might be interested in this overview, How to Version Control Jupyter Notebooks. I tried to strike a balance between going into detail and covering a broad range of tools. Feedback and discussion welcome!




If I intend to publish the notebook “directly” (e.g. via nbviewer) with output cells intact, I tend to use print(…) and save charts as external PNGs, then insert those external images. Both keep the notebook files more manageable for git, and especially keep the diffs quite clean.
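To see why inlining plots hurts git: every figure is stored as base64 text inside the .ipynb JSON, so even a small chart adds kilobytes of noise to each diff. A stdlib-only illustration (the notebook dict below is hand-built for the example, not a real file):

```python
import base64

# Hand-built stand-in for a notebook's JSON structure, holding one fake
# 5 KB "PNG" output the way Jupyter embeds real plots.
nb = {
    "cells": [
        {
            "cell_type": "code",
            "source": "plot()",
            "outputs": [
                {
                    "output_type": "display_data",
                    "data": {
                        "image/png": base64.b64encode(b"\x89PNG" + b"\x00" * 5000).decode()
                    },
                }
            ],
        }
    ]
}

def embedded_image_chars(nb):
    """Count base64 characters of images embedded in the notebook JSON."""
    total = 0
    for cell in nb["cells"]:
        for out in cell.get("outputs", []):
            for mime, payload in out.get("data", {}).items():
                if mime.startswith("image/"):
                    total += len(payload)
    return total

# One small plot already contributes thousands of characters to every diff
# that touches this cell; an external PNG would be a one-line reference.
print(embedded_image_chars(nb))
```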

And “restart kernel &amp; clean” followed by “run all” before a commit. That adds the insurance that you haven’t introduced any state / execution-order problems, unlike just clearing the outputs.
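That cleanup step is easy to script, since a notebook is just JSON; a minimal stdlib-only sketch (in practice nbstripout or nbconvert's --clear-output does this more robustly):

```python
import json

def clear_outputs(path):
    """Strip outputs and execution counts from a notebook file in place.

    A sketch of the "clean before commit" step; tools like nbstripout or
    `jupyter nbconvert --clear-output` do the same more robustly, and
    `jupyter nbconvert --to notebook --execute` covers the "run all" step.
    """
    with open(path) as f:
        nb = json.load(f)
    for cell in nb["cells"]:
        if cell["cell_type"] == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    with open(path, "w") as f:
        json.dump(nb, f, indent=1)
```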

Great summary. Chiming in with my workflow (hundreds of notebooks used for research, collaborative development of university courses, graduate student projects, etc.), I’ve found jupytext to be transformative. A few notes from my own experience:

  1. I keep the Python version and the JSON version in different folders. This makes diffing, grepping, adding files to the git index, etc. easier. The ipynb files reside one level up, but aren’t committed to my git repo. My global Jupyter config looks like this:
c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"  # noqa
c.ContentsManager.preferred_jupytext_formats_save = "py:percent" # noqa
c.ContentsManager.default_jupytext_formats = "ipynb,python//py" # noqa
c.ContentsManager.default_notebook_metadata_filter = "all,-language_info"
c.ContentsManager.default_cell_metadata_filter = "all"

This passes almost all metadata through and pairs mynotes.ipynb with python/mynotes.py, creating the python folder if it doesn’t exist.

  2. I use the pre-commit package to run black, reorder-python-imports and flake8 on every Python file, which is obviously a major improvement for informative diffs.

  3. For those of you using nbgrader: you’ll need to disable jupytext when you convert source notebooks to student release versions. You can do that by editing that notebook’s formats metadata string to “ipynb” only.

  4. Here’s an example of a set of teaching notebooks with their py:percent counterparts: EOSC 213

  5. Bottom line: jupytext is the missing piece I’ve been looking for since I started working with IPython notebooks.
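For anyone who hasn’t seen it, the py:percent format that jupytext pairs with is plain Python with `# %%` cell markers, which is exactly why it diffs and greps so cleanly. A tiny example of what a paired file looks like (contents are illustrative):

```python
# %% [markdown]
# # Data loading
# This comment block becomes a markdown cell when opened as a notebook.

# %%
import statistics

samples = [2.0, 4.0, 6.0]

# %%
# Each "# %%" marker starts a new code cell.
print(statistics.mean(samples))  # -> 4.0
```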


For me nbdime works extremely well, especially if you set it up so that git uses it for rendering diffs between notebooks.

The only time people I work with run into trouble with this is when they attempt to put too much code into a notebook. However, I consider this a feature, not a bug: the advice then is to put that code in a .py file instead. This keeps the notebook focused on explaining things, and we get real IDE features for the code.
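A sketch of that split, with hypothetical names throughout (the module is written to a temp dir here only to keep the example self-contained; in practice cleaning.py would live in the repo next to your notebooks and be edited in your IDE):

```python
import pathlib
import sys
import tempfile

# Hypothetical module that would normally live in your repo as cleaning.py.
module_src = '''
def normalize(values):
    """Scale values linearly to the 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
'''

tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "cleaning.py").write_text(module_src)
sys.path.insert(0, str(tmp))

# In the notebook, a cell then just does:
from cleaning import normalize

print(normalize([2, 4, 6]))  # -> [0.0, 0.5, 1.0]
```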


Do you just do the reset/run/commit every time or have you written a script to accomplish this? It’s simple enough, but I’m sure I could still somehow mess this up.

Since I git add -p on principle, there is no persistent messing up.

And nbdime is preferable to ReviewNB because you want to use git and not GitHub, or for some additional flexibility? The ReviewNB folks just added GitHub commenting to their product, which looks pretty slick.

@schmudde, any thoughts on how to manage NBs in pull requests?

I’m reviewing a PR in Bitbucket; it doesn’t render NBs nicely, and therefore I end up scrolling up and down the page. What are the options in this case? Should I review the PR locally rather than in BB?

If using GitHub, I’d highly suggest ReviewNB. It lets you see notebook diffs for any commit or pull request. But that’s a GitHub plugin, and you said you’re on Bitbucket.

nbdime also offers version control integration, though only with Git and Mercurial.

What did you end up doing? Visual diffing locally seems like a manual process - but perhaps the only one available in this instance.

I ended up excluding .ipynb from the diff page; the feature is still in Beta (Bitbucket Labs) at the moment.

Hi! We’ve been building automatic version control support for all Jupyter notebooks. It’s basically an add-on (that you can download and use for free) that adds a button to your notebook. The button a) runs your experiment on your cloud instance of choice (AWS, GCP, Azure) and b) stores a snapshot of the notebook so you can revert to any experiment you did 5 minutes ago or 5 years ago - including the notebook’s outputs (e.g. if you plot some graphs).

Screenshots, videos, descriptions and instructions here:

Please add feedback!


Sorry for (kind of) marketing plug here but I think it will be interesting to you.

We’ve recently built an extension to jupyter-notebooks and jupyter-lab that lets you version checkpoints by clicking a button and then you can browse versions and diff easily:

and I can easily share it with anyone by sending a link like this one.

We’ll be adding new features around this like commenting and stuff and I would love to hear what you think and what you would like to see there.
Anyway, I hope this helps.

Finally got to take a look at this. It’s really slick! Happy to see the inclusion of rich media in the diffs. Very cool. I’ll add it to the article.

I’m curious - how does it generate the versions?

Glad you liked it @schmudde!

If you install the extension you will get a new upload button in your jupyter lab/notebook.
Then, whenever you decide to upload a snapshot you click that button (you can name the snapshot and add a description too).

If you want to version your machine learning experiment runs inside of a versioned notebook (a lot of versioning I know), then those snapshots can be generated automatically whenever you run a cell with neptune.create_experiment() in it.
You can see an example of model training in a notebook here.

Hi all!

In the spirit of moving this discussion about diffing/merging, version control, etc. forward, we’re working on a new OPTIONAL file format JEP. Here’s the thread - Proposed-JEP: Investigate alternate, optional file formats

Please come join the discussion!

First off, thanks for jupytext.
I have a few questions, and I can’t find a better place to ask them.
My setup is JupyterLab on a TLJH instance.
I’ve got *.ipynb in my .gitignore so that I only save the *.py files.
Now, when I pull a new .py file, jupytext does not automatically convert it to the notebook version,
and I need to run jupytext --to notebook myself,
even though my .jupyter/ config file has

c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
c.ContentsManager.notebook_extensions = "ipynb,py"

in it.
That seems to only sync from notebook to script.

Am I missing something?

If I generate the notebook with the --to command above,
I also need to touch the .py file.

I must be doing something wrong, because this would all be solved if the auto-pairing worked in both directions.

Here’s a docker image that is behaving the way I expect (i.e. when I create and save a new notebook, both a .md and a .ipynb file are created, and when I change one of them in Jupyter the other one is modified).

Here’s the config file that does that for the spawned notebook:
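(The config itself doesn’t appear to have made it into the post; for reference, a minimal jupyter_notebook_config.py fragment for pairing every notebook with a Markdown file, per the jupytext docs of that era, would look roughly like this - `c` is supplied by Jupyter:)

```python
c.NotebookApp.contents_manager_class = "jupytext.TextFileContentsManager"
c.ContentsManager.default_jupytext_formats = "ipynb,md"
```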


Is there a way to configure Jupyter so that on exit it cleans all outputs of the file, leaving it basically with just the code itself?

Hi @Royi - this issue might contain the info you need: Suggestion for content: Configuring jupyter to scrub notebook output · Issue #1803 · alan-turing-institute/the-turing-way · GitHub
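For reference, the trick that issue points at is Jupyter’s pre-save hook, which scrubs outputs every time a notebook is saved (as far as I know there is no exit-only hook). A sketch along the lines of the documented recipe:

```python
def scrub_output_pre_save(model, **kwargs):
    """Strip outputs and execution counts from code cells before the
    notebook is written to disk."""
    if model["type"] != "notebook":
        return
    if model["content"]["nbformat"] == 4:
        for cell in model["content"]["cells"]:
            if cell["cell_type"] == "code":
                cell["outputs"] = []
                cell["execution_count"] = None

# Then, in jupyter_notebook_config.py:
# c.FileContentsManager.pre_save_hook = scrub_output_pre_save
```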


Just to say - like others - I have got completely used to Jupytext, configured to save the notebook as .Rmd (RMarkdown), and I ignore the .ipynb files for version control. The .Rmd files (or your preferred text flavor) are so much easier to diff, and they don’t contain the output.