You can trigger a build of your repository's image each time something is merged into its master branch.
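One way to do this (a sketch, assuming your BinderHub exposes the same /build endpoint its web frontend uses, and that the repository lives on GitHub) is to request the build URL from a post-merge CI step:

```bash
# Ask the BinderHub to build the master branch of <owner>/<repo>
# (both placeholders). The endpoint streams build events, so
# --max-time stops listening once the build has been kicked off.
curl --max-time 30 "https://mybinder.org/build/gh/<owner>/<repo>/master"
```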
There are also some tricks to how you write your Dockerfile so that rebuilds (where only some things have changed) are faster. The general principle is to install the things that take longest to build first in your image, and the things that are fast later. For example, installing and compiling some big package is better done at the start of the Dockerfile, with notebooks and the README copied over at the end. That way the layer containing the expensive-to-build thing is reused when you only change the README.
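As a minimal sketch (the base image, package list, and paths below are placeholders, not from the original post):

```dockerfile
FROM python:3.11-slim

# Expensive step first: this layer is cached and reused on rebuilds
# as long as requirements.txt itself does not change.
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# Cheap, frequently-changing content last: editing the README or a
# notebook only invalidates the layers from this point onwards.
COPY README.md /home/jovyan/README.md
COPY notebooks/ /home/jovyan/notebooks/
```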
Can you explain a bit more what you mean? You shouldn't need to build the image on each node. Once a particular image has been built, all subsequent launches of it should pull it from your BinderHub's Docker registry.
On mybinder.org we use the “sticky builds” feature of BinderHub. In a normal BinderHub, a build is assigned to a “random” node in the cluster. With sticky builds enabled, BinderHub tries to assign builds of the same repository to the same node, which increases the chances that shared layers are already in that node's Docker cache.
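If you run your own BinderHub, this is a single switch in the BinderHub config file; a minimal sketch, assuming a BinderHub version recent enough to have the sticky_builds option:

```python
# binderhub_config.py (sketch): opt in to sticky builds so that repeat
# builds of a repository land on the node that already has its layers.
c.BinderHub.sticky_builds = True
```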
Another thing we found on mybinder.org is that people rebuild their image over and over at the start, but then after a few days of development stop changing it quite so frequently (time between changes becomes >> time to build). The recommendation right now is to use repo2docker or docker build locally for fast-paced development, as it runs more quickly and gives you easier access to the build logs.
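For that local loop, something like this should work (a sketch; repo2docker is installed from the jupyter-repo2docker package, and the flags shown are from recent releases):

```bash
# Install repo2docker once.
pip install jupyter-repo2docker

# Build the image from the current checkout and launch Jupyter in it;
# add --no-run to only build, which is handy for reading build logs.
repo2docker .

# If the repository already has a plain Dockerfile, you can also
# iterate with Docker directly:
docker build -t my-binder-test .
```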