- people are starting to put demo notebooks in their package repos;
- some people are starting to binderise their package repos so that others can try out the package using the demo notebooks;
- some people have unit tests for their packages as part of their package repos;
- some people use CI (Travis, CircleCI, etc.) to run tests automatically over their package repos;
- some people have binderised notebooks in their package repos that are broken (which doesn’t help demo the package), or binderised notebook-generated docs that don’t work properly;
- people creating their first package may well want to show it off using demo notebooks that run properly in Binder, but have no idea about writing tests, let alone running them under CI (I put myself in this category!).
So I wonder about a pattern for using Binder that:

- in the first instance, encourages people to write demo notebooks that show off a package and provide de facto tests of some elements of the package;
- in the second instance, can be used as part of a manually operated test framework, e.g. using an approach similar to nbval (or are there other approaches for testing that notebooks run correctly?);
- in the third instance, could be used as part of an automated test framework?
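For the manually operated tier, nbval (a pytest plugin) is one concrete option: it re-runs each notebook and either compares the fresh outputs against the stored ones or just checks that every cell executes. A minimal sketch, assuming nbval is installed from PyPI and the notebook paths are placeholders:

```shell
# Install pytest plus the nbval plugin
pip install pytest nbval

# Strict mode: re-execute the notebook and compare outputs
# against the outputs saved in the .ipynb file
pytest --nbval demo.ipynb

# Lax mode: only check that every cell runs without raising an
# error -- often more useful for demo notebooks whose outputs
# (timestamps, random seeds, etc.) change between runs
pytest --nbval-lax notebooks/
```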
This could provide a form of “literate testing”? IIRC it also complements an approach I heard mentioned in a podcast somewhere (?!), where if a scheduled notebook failed to run correctly, engineers could treat the executed notebook as an error log and spot the part that had failed.
We started work on building examples of using repo2docker as part of your CI pipeline in https://github.com/binder-examples/continuous-build/. It is a pretty sophisticated (some would say complex) setup right now. I think having simpler ones would be great as that would help people to get started.
A super simple thing I have started doing is to run `repo2docker . papermill a-notebook.ipynb` in a repository that has notebooks that should run on mybinder.org, when I want to (somewhat) automatically check that they run to completion.
The repo2docker part means I know the repo will still build on mybinder.org; then I run the notebook inside that container with `papermill a-notebook.ipynb`, which lets me see whether or not there is an error.
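Spelled out, that workflow might look like the following sketch. The notebook names are placeholders, the papermill CLI normally takes an output notebook path as a second argument, and this assumes repo2docker passes the container's exit status back through:

```shell
# Build the image from the current repo exactly as mybinder.org
# would, then run papermill inside the resulting container.
# An error in any cell makes papermill exit non-zero, and the
# executed copy (out.ipynb) records which cell failed.
jupyter-repo2docker . papermill a-notebook.ipynb out.ipynb

# Inspect the exit code to see whether the notebook ran to completion
echo "exit code: $?"
```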
@betatim this also seems relevant to the binder conversations around a ‘verify’ file that would serve as a kind of test-suite for reproducibility purposes…
Nods. In the context of reproducibility (and of creating a JOSS-like journal that takes a notebook as the submission instead of a paper.md), I am starting to think we should check whether there is a `paper.ipynb` in the repo and, if so, choose that as “the way to verify”. A notebook gives you a document that is meant for producing human-readable output from prose and computation, as well as all the flexibility of a script (via the `!` magic).
This means that a service verifying that your repo runs on a BinderHub (or is minimally reproducible) would run `repo2docker https://zenodo.org/<id-here> papermill paper.ipynb` and check what the exit code is.
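As a sketch, that verification step could also be wired straight into CI. The following Travis-style config fragment is an assumption-laden illustration (the Zenodo id stays a placeholder as above, the notebook name follows the `paper.ipynb` convention, and it assumes Docker is available on the build machine):

```yaml
# .travis.yml (sketch)
language: python
services:
  - docker
install:
  - pip install jupyter-repo2docker
script:
  # Build the archived repo the same way BinderHub would, then
  # execute paper.ipynb with papermill inside the container;
  # a non-zero exit code fails the build.
  - jupyter-repo2docker https://zenodo.org/<id-here> papermill paper.ipynb out.ipynb
```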