Trouble spinning up a mybinder session

I’m using mybinder.org to serve interactive tutorials for students in a class at my workplace. Here’s the mybinder link: Binder

For the past month, I’ve had various steps fail on almost all of my sessions and on maybe 10% of student sessions. A common error log in these cases looks like this:

2022-05-17T23:50:01Z [Warning] Failed to pull image "3i2li627.gra7.container-registry.ovh.net/binder/ovhbhub-tc-init:2020.12.4-n655.hfe65496": rpc error: code = NotFound desc = failed to pull and unpack image "3i2li627.gra7.container-registry.ovh.net/binder/ovhbhub-tc-init:2020.12.4-n655.hfe65496": failed to resolve reference "3i2li627.gra7.container-registry.ovh.net/binder/ovhbhub-tc-init:2020.12.4-n655.hfe65496": 3i2li627.gra7.container-registry.ovh.net/binder/ovhbhub-tc-init:2020.12.4-n655.hfe65496: not found
2022-05-17T23:50:02Z [Warning] Error: ImagePullBackOff
2022-05-17T23:50:02Z [Normal] Back-off pulling image "3i2li627.gra7.container-registry.ovh.net/binder/ovhbhub-tc-init:2020.12.4-n655.hfe65496"
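
In case it helps with diagnosis, here’s a small Python sketch I’ve been using to check whether that tag is actually visible from outside the cluster. It assumes the registry speaks the standard Docker Registry HTTP API v2 and will hand out an anonymous pull token; the host, repository, and tag are copied straight from the log above.

```python
import requests

# Host, repository, and tag copied from the error log above.
REGISTRY = "3i2li627.gra7.container-registry.ovh.net"
REPO = "binder/ovhbhub-tc-init"
TAG = "2020.12.4-n655.hfe65496"


def manifest_status(registry, repo, tag):
    """HTTP status of the image manifest, following the standard v2 token
    flow if the registry answers 401 with a Bearer challenge."""
    url = f"https://{registry}/v2/{repo}/manifests/{tag}"
    headers = {"Accept": "application/vnd.docker.distribution.manifest.v2+json"}
    resp = requests.head(url, headers=headers, timeout=30)
    challenge = resp.headers.get("Www-Authenticate", "")
    if resp.status_code == 401 and challenge.startswith("Bearer "):
        # e.g. Bearer realm="https://.../service/token",service="harbor-registry"
        params = dict(
            part.split("=", 1) for part in challenge[len("Bearer "):].split(",")
        )
        token_resp = requests.get(
            params["realm"].strip('"'),
            params={
                "service": params.get("service", "").strip('"'),
                "scope": f"repository:{repo}:pull",
            },
            timeout=30,
        )
        headers["Authorization"] = f"Bearer {token_resp.json().get('token', '')}"
        resp = requests.head(url, headers=headers, timeout=30)
    return resp.status_code


# 200 = the tag exists, 404 = the registry really can't find it
print(manifest_status(REGISTRY, REPO, TAG))
```

If that returns 404, the node’s “not found” would seem to be genuinely registry-side rather than anything in my browser or repo.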

This error shows up in multiple browsers (Chrome, Edge), but it’s intermittent; often the Binder session starts right up.

I’d like to know if there’s a way I can configure my browser, internet connection, or the source repo to make the user experience more consistent. Let me know if I can provide more details.

Thank you!

Probably not. The free mybinder.org service has finite resources, and that’s probably why it’s intermittent.

If you want something more controlled, you could share remote compute instances with Jupyter and your preferred environment installed, or set up a JupyterHub, such as The Littlest JupyterHub, depending on the size of your group.


Thanks for the heads-up. How can I tell from the error message whether this is a service failure? And how do I square that failure with the good (?) status I always see on the mybinder.org federation status page?

…for example, as of a minute ago, three of the four deployments are up, and all of them have user pods to spare before they hit their quota:

[screenshot of the mybinder.org federation status page]

Usually, if the repo has built an image and launched successfully in the past and nothing has changed in the repo, then anything not working is out of your control. The indicator page you show will definitely clue you in to major issues. However, there are even more finite resources hidden in there, such as how many images can be pulled from Docker Hub, and I’ve seen that communication be an issue at times.

Sometimes there are also delays as pods spin up or go bad.
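
If you want to see which phase a launch actually dies in, you can watch BinderHub’s build event stream directly. Here’s a rough Python sketch, assuming the public mybinder.org /build endpoint and a GitHub-hosted repo; OWNER/REPO/REF are placeholders you’d swap for your own repository:

```python
import json
import requests

# Placeholders -- substitute your own GitHub org/repo/branch.
OWNER, REPO, REF = "OWNER", "REPO", "HEAD"
url = f"https://mybinder.org/build/gh/{OWNER}/{REPO}/{REF}"

# The endpoint streams server-sent events; each "data:" line is a JSON blob
# with a "phase" (waiting/building/launching/ready/failed/...) and a message.
with requests.get(url, stream=True) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip keep-alives and blank separators
        event = json.loads(line[len("data: "):])
        print(event.get("phase"), "-", event.get("message", "").strip())
        if event.get("phase") in ("ready", "failed"):
            break
```

Running that when a student reports a problem should tell you whether launches are dying while pulling/launching the image (like the log above) or earlier in the build.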
