- people are starting to put demo notebooks in their package repos;
- some people are starting to binderise their package repos so people can try out the package using demo notebooks;
- some people have unit tests for packages as part of their package repos;
- some people use CI (Travis, CircleCI, etc.) to run tests automatically over their package repos;
- some people have binderised notebooks in their package repos that are broken (which doesn’t help demo the package), or notebook-generated docs that don’t render properly;
- people creating their first package may well want to show it off with demo notebooks that run properly in Binder, but have no idea how to write tests, let alone run them under CI (I put myself in this category!).
So I wonder about a pattern for using Binder that:
- in the first instance, encourages people to write demo notebooks that show off a package and that provide de facto tests of some elements of the package;
- in the second instance, can be used as part of a manually operated test framework, e.g. using an approach similar to nbval (or are there other approaches for testing that notebooks run correctly?);
- in the third instance could be used as part of an automated test framework?
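To make the second step concrete, here is a minimal, stdlib-only sketch of the idea behind nbval: re-run each code cell of a notebook (an .ipynb file is just JSON) and compare the fresh stdout against the stream output stored in the file. The `check_notebook` and `run_cell` names and the in-memory notebook are my own illustrative inventions, not part of nbval’s API — in practice you would just `pip install nbval` and run `pytest --nbval notebooks/`.

```python
import contextlib
import io

def run_cell(source, env):
    """Execute one code cell's source, returning captured stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(source, env)
    return buf.getvalue()

def check_notebook(nb):
    """Re-run each code cell in order (sharing one namespace, like a
    kernel) and compare fresh stdout with the stored stream output.
    Returns (cell_index, passed) pairs, nbval-style."""
    env = {}
    results = []
    for i, cell in enumerate(nb["cells"]):
        if cell["cell_type"] != "code":
            continue
        stored = ""
        for out in cell.get("outputs", []):
            if out.get("output_type") == "stream":
                text = out.get("text", "")
                stored += "".join(text) if isinstance(text, list) else text
        fresh = run_cell("".join(cell["source"]), env)
        results.append((i, fresh == stored))
    return results

# A tiny in-memory notebook; normally this would be json.load()-ed
# from a demo .ipynb file in the package repo.
nb = {
    "cells": [
        {"cell_type": "markdown", "source": ["# Demo\n"]},
        {"cell_type": "code",
         "source": ["x = 2 + 2\n", "print(x)\n"],
         "outputs": [{"output_type": "stream", "name": "stdout",
                      "text": ["4\n"]}]},
    ]
}

print(check_notebook(nb))  # → [(1, True)]
```

A demo notebook checked this way really is a de facto test: if the package’s behaviour changes, the stored outputs stop matching and the check fails.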
This could provide a form of “literate testing”? IIRC it also complements an approach I heard mentioned in a podcast somewhere (?!): if a scheduled notebook failed to run correctly, engineers could look at the run notebook as an error log and spot the part that had failed.
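A minimal, stdlib-only sketch of that notebook-as-error-log idea (the `run_as_error_log` name is mine, not from any real scheduler): execute the code cells in order and, on the first failure, write the traceback into that cell’s outputs and stop, so the partially run notebook itself records exactly where things broke.

```python
import traceback

def run_as_error_log(nb):
    """Execute code cells in order; on the first failure, stash the
    traceback in that cell's outputs and stop, so the partially run
    notebook doubles as an error log. Returns the index of the failing
    cell, or None if everything ran."""
    env = {}
    for i, cell in enumerate(nb["cells"]):
        if cell["cell_type"] != "code":
            continue
        try:
            exec("".join(cell["source"]), env)
        except Exception as e:
            cell["outputs"] = [{
                "output_type": "error",
                "ename": type(e).__name__,
                "evalue": str(e),
                "traceback": traceback.format_exc().splitlines(),
            }]
            return i
    return None

# A toy scheduled notebook whose second cell has a bug.
nb = {"cells": [
    {"cell_type": "code", "source": ["data = [1, 2, 3]\n"], "outputs": []},
    {"cell_type": "code", "source": ["total = data[5]\n"], "outputs": []},
    {"cell_type": "code", "source": ["print(total)\n"], "outputs": []},
]}

failed = run_as_error_log(nb)
print(failed)                                  # → 1
print(nb["cells"][1]["outputs"][0]["ename"])   # → IndexError
```

An engineer opening the saved notebook would see cells before index 1 ran cleanly, the traceback sitting under the failing cell, and later cells untouched — the run artefact is the error report.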