Build completes successfully then "Binder Inaccessible"

Any assistance with what the issue is here will be appreciated. Thanks.

With repo

Successfully Built

Then redirects to

Here is the link to launch a new Binder for this repo:

What you see, with the build succeeding but the notebook not coming up, is the classic outcome when you try the strongly discouraged Dockerfile route and don’t strictly follow the steps detailed here.
Fortunately, the steps executed via the Dockerfile are all easily accomplished using the standard configuration files recommended for MyBinder launches.

I forked your repo and made a version using standard configuration files.

Try it out by clicking here to launch, then open the ‘ACT_Conveyance_Duty.ipynb’ notebook and run it.
You’ll see it works now; the only oddness is a warning that the included file "count.mzn" overrides a global constraint file from the standard library and that this is deprecated. Importantly, the basics are now working in MyBinder-served sessions without need for a Dockerfile.

Fork is here.

If you look around in it, you’ll see everything I changed is in a binder directory. The MyBinder service uses configuration files in such a directory preferentially, so your Dockerfile in the root can be left alone. I thought maybe that Dockerfile works for you somewhere else. I copied yours into binder to modify it; however, I soon abandoned tinkering with it once I figured it looked simple enough to do by supported methods. It is still in there, renamed so as not to interfere.
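For reference, the binder directory idea can be sketched like this: the file names below are the standard repo2docker configuration files, while the contents shown are illustrative placeholders, not the fork's actual files.

```shell
# Sketch of a binder/ directory with standard repo2docker configuration
# files; repo2docker prefers these over files in the repo root, so a
# root-level Dockerfile can be left in place untouched.
mkdir -p binder

# Python packages to pip-install into the image (contents illustrative)
cat > binder/requirements.txt <<'EOF'
numpy
matplotlib
EOF

# Arbitrary shell commands run once, at the end of the image build
cat > binder/postBuild <<'EOF'
#!/bin/bash
echo "post-build steps (e.g. compiling MiniZinc) go here"
EOF
chmod +x binder/postBuild

ls binder
```

repo2docker also recognizes other files in the same directory (for example apt.txt for Debian packages), following the same precedence rule.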

One choice I made was to use a current version of MiniZinc built from source via postBuild. I was having trouble getting what I think is an old version available via apt-get to work as your code seemed to expect. Maybe it isn’t as old as I think? The copyright notice suggests it may date to 2018, unless updating that part of the code was an oversight. Anyway, that may be a further option to adjust if you want. Building from source does add significant time when the image has to be rebuilt.

Another place that could use further attention is what I did to get OptiMathSAT to work. I tried a few things and left the code cluttered. As it stands now in postBuild, adjusting anything there triggers compiling from source again, so it is not convenient to experiment unless I temporarily remove the compilation steps.
I also tried to add Gecode before compiling so I could set -DGECODE_ROOT=; however, I haven’t verified that actually worked.
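The build-from-source steps above can be sketched as a postBuild script; the repository URL is the real MiniZinc compiler repo, but the install prefix, Gecode location, and overall layout are assumptions, not the fork's exact script. The example only writes and syntax-checks the script; it does not run the (long) build.

```shell
# Sketch of a binder/postBuild that builds a current MiniZinc from source,
# pointing CMake at a Gecode tree via -DGECODE_ROOT (paths illustrative).
mkdir -p binder
cat > binder/postBuild <<'EOF'
#!/bin/bash
set -euo pipefail

# Gecode must be available first so CMake can locate it:
GECODE_ROOT="$HOME/gecode"
# ... fetch or build Gecode into $GECODE_ROOT here ...

# MiniZinc compiler (libminizinc) built from source:
git clone --depth 1 https://github.com/MiniZinc/libminizinc.git
cd libminizinc && mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX="$HOME/.local" -DGECODE_ROOT="$GECODE_ROOT"
cmake --build . --parallel
cmake --install .
EOF
chmod +x binder/postBuild
bash -n binder/postBuild   # syntax check only; the real build runs on Binder
```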

I really appreciated your detailed answer, and read it carefully since I met the same problem.

I am in the situation where I would like to share a notebook that demonstrates the usage of a binary wheel package that I have made (xdyn, lightweight ship simulator modelling the dynamic behaviour of a ship at sea, can be seen on gitlab dot com sirehna_naval_group/sirehna/xdyn)

There is a docker image sirehna/xdyn-ubuntu2204-py310:v6-2-2 that contains Ubuntu 22.04, Python 3, the wheel package, and the installation of this wheel package. I thought I just had to use this image and install the required packages, but all my 20 trials failed. I have also published the binary wheel package on PyPI: it is the desired package for Python 3.10 on Linux, but I could not use it with conda (I am not an expert…). I don’t see how the postBuild strategy could work for me.

All my attempts are stored in this GitHub repository: GitHub - Gjacquenot/test_binder_notebook

What I tried

  • many many Dockerfiles,
  • only a requirements.txt file → Did not work since
  • environment.yml → Did not work since conda does not know that it should use channel pypi
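For what it's worth, environment.yml can in fact install PyPI-only packages through a nested pip: section, which conda-env hands off to pip; a minimal sketch (the pins and package names are illustrative, not a tested configuration):

```shell
# Sketch: an environment.yml where conda provides the base packages and a
# nested "pip:" list pulls PyPI-only packages (names/pins illustrative).
cat > environment.yml <<'EOF'
name: xdyn-demo
channels:
  - conda-forge
dependencies:
  - python=3.10
  - numpy
  - matplotlib
  - pip
  - pip:
      - xdyn   # installed from PyPI by pip, not by conda
EOF
```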

Below is my multistage-build Dockerfile, using information I found on binder dot org.

A local run asks for a password; could that be related to something on mybinder dot org?

FROM sirehna/xdyn-ubuntu2204-py310:v6-2-2 AS xdyn_source

FROM jupyter/minimal-notebook:notebook-6.5.2

USER root
# python3, notebook, and jupyterlab are already in the base image;
# these installs are kept for explicitness
RUN apt-get update && \
    apt-get install --yes python3
RUN python3 -m pip install --no-cache-dir notebook jupyterlab

RUN  python3 -m pip install --no-cache-dir numpy matplotlib
# Pull only the wheel out of the xdyn image
COPY --from=xdyn_source xdyn-6.2.2-cp310-cp310-linux_x86_64.whl ./
RUN  python3 -c 'import numpy'
RUN  python3 -m pip install --no-index --find-links ./ xdyn \
 && python3 -c 'import xdyn; print(dir(xdyn)); print(xdyn.__version__)'

COPY Untitled.ipynb /home/${NB_USER}
RUN chown -R ${NB_UID} ${HOME}
# Note: as originally written the image never switched back to ${NB_USER};
# Binder expects the container to run as the non-root notebook user, which
# may explain the password prompt seen locally
USER ${NB_USER}

@fomightez thanks for your reply.

I’ve decided to go down the route of publishing my image as a github package and letting anyone who wants to run it, pull it into Docker.

The main factor was that the docker image option is not a preferred option for

All the best,


Is that file xdyn-6.2.2-cp310-cp310-linux_x86_64.whl available somewhere public online where one can use wget or curl in a mybinder-spawned session to get it? Or can you make it so it is? Other than that, it just seems you need numpy and matplotlib. So a simple requirements.txt should handle those two packages with the addition of a postBuild to handle getting the .whl file and installing it with pip install.
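The suggestion above can be sketched as a two-file setup: a requirements.txt for numpy and matplotlib, plus a postBuild that fetches and installs the wheel. The wheel URL below is a placeholder, not the real location; the example only writes and syntax-checks the script.

```shell
# Sketch: requirements.txt for the PyPI packages...
mkdir -p binder
cat > binder/requirements.txt <<'EOF'
numpy
matplotlib
EOF

# ...and a postBuild that downloads and pip-installs the wheel
# (the URL is a placeholder; point it at wherever the .whl is published).
cat > binder/postBuild <<'EOF'
#!/bin/bash
set -euo pipefail
WHEEL=xdyn-6.2.2-cp310-cp310-linux_x86_64.whl
wget --quiet "https://example.org/path/to/$WHEEL"
python3 -m pip install --no-cache-dir "./$WHEEL"
python3 -c 'import xdyn; print(xdyn.__version__)'
EOF
chmod +x binder/postBuild
bash -n binder/postBuild   # syntax check only; no network access here
```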

Thanks for the quick answer

The binary wheel file is available here

I will try the postBuild solution!

The postBuild was easy to set up. However, I need to provide a binary-compatible wheel for Ubuntu 18.04 LTS (Bionic Beaver); my wheel file was created on Ubuntu 22.04 LTS (Jammy Jellyfish), so it does not work… I will try with another wheel file…
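The "not a supported wheel on this platform" error below is consistent with that: a manylinux_2_31 tag requires glibc 2.31 or newer, and Ubuntu 18.04 ships glibc 2.27. You can inspect what a given environment will accept:

```shell
# List the wheel tags the current interpreter/platform accepts; a wheel
# installs only if one of its filename tags appears in this list.
python3 -m pip debug --verbose | head -n 20

# The glibc version gates which manylinux_X_Y tags are accepted:
ldd --version | head -n 1
```

Running this inside the Binder image would show which manylinux tags its base distribution supports, and hence which wheel to build.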

Step 43/48 : RUN ./binder/postBuild
 ---> Running in 2a7ed9d8f6e5
--2022-12-29 19:42:01--
Resolving (,,, ...
Connecting to (||:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8204568 (7.8M) [application/octet-stream]
Saving to: ‘xdyn-6.2.2-cp310-cp310-manylinux_2_31_x86_64.whl’

     0K .......... .......... .......... .......... ..........  0% 3.35M 2s
  8000K .......... ..                                         100% 55.0M=0.2s

2022-12-29 19:42:02 (49.1 MB/s) - ‘xdyn-6.2.2-cp310-cp310-manylinux_2_31_x86_64.whl’ saved [8204568/8204568]

ERROR: xdyn-6.2.2-cp310-cp310-manylinux_2_31_x86_64.whl is not a supported wheel on this platform.
Removing intermediate container 2a7ed9d8f6e5
The command '/bin/sh -c ./binder/postBuild' returned a non-zero code: 1

So your solution is to build your container with a GitHub Action, push it to a repo, and Binder knows it should use this image? Is that right?

A little less ambitious than that. After building the image with a GitHub Action and pushing it to , I leave it there for anyone to pull.


Understood. I did the same with the ship simulator xdyn. I want to share tutorials for the Python API. However, in my case, it is not that easy to use Binder.



I have often found that the time and effort spent making something work in one very specific case, with a very specific Dockerfile, once, for a demo, could instead be spent getting a more portable, sustainable build solution with conda-forge that folk could use to actually build things.

The approach here would be to get the package built (from source) and uploaded to conda-forge, and then in all likelihood it would “Just Work” on Binder. Conda-forge brings its own somewhat opinionated choices about compilers, etc., which can initially be a pain, but the end result is a (usually) more harmonious whole. Then, as new versions of the package-of-interest, or related things, come out, a small army of bots helps improve the “feedstocks,” so that everything keeps working together: examples include new platforms (Apple M1) or CPythons (3.11).

The first step to the above (or any non-PyPI distribution) would be to also package a “canonical” .tar.gz sdist next to that .whl on PyPI, and then start the staged-recipes process.
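Producing that canonical sdist can be sketched as follows; the project metadata below is illustrative (the real xdyn build system may differ), and the build/upload commands are shown as comments rather than executed.

```shell
# Sketch: a PEP 517 pyproject.toml is enough for "python -m build" to emit
# the canonical dist/<name>-<version>.tar.gz sdist (metadata illustrative).
cat > pyproject.toml <<'EOF'
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "xdyn"
version = "6.2.2"
EOF

# Then, with "python3 -m pip install build twine" done once:
#   python3 -m build --sdist          # writes dist/xdyn-6.2.2.tar.gz
#   python3 -m twine upload dist/*    # publish next to the existing .whl
```

With the sdist on PyPI, a conda-forge staged-recipes PR can point its source at that tarball.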


I have found a Dockerfile that works with binder! This was what I was looking for.

I just made a fork of this project and added my dependency, which is only published on for the time being. And it just works…

I understand that conda-forge makes it easy to handle several Python versions, but for a proof of concept or a tutorial, a single working version shipped via a Dockerfile is also fine…
