How can I support mybinder.org?

How can new groups help maintain mybinder?

What can a digital library, or a project running on a university cloud (like the Massachusetts Open Cloud), do to support mybinder?


It makes me super happy that you ask :slight_smile:

Off the top of my head there are three ways to support mybinder. They aren’t exclusive categories (a little in each column works).

  1. Become part of the mybinder.org federation. The service relies on donated compute power; our current plan for making that sustainable is to be able to run on any cloud (GKE, OVH, AWS, Azure, bare metal, etc.) and to spread the traffic across many sites. This means that small and large donations of compute are both useful, and credit for a donation can go to the donor. We are happy to expand our “federation” when we can.

  2. Contribute directly to the project(s). We rely on many fantastic open-source tools and projects, many of which are substantial projects in their own right; helping them helps Binder. We also try to cover more than just Python and Jupyter in terms of use-cases. This means there is lots of work at all levels of experience and across a broad range of topics (explaining ideas, technical writing, coding, user experience, devops, shaping feature ideas, etc.). It also means we need people who use tools beyond Python and Jupyter to get involved, so that we can continue with our strategy of meeting users where they are instead of prescribing a way of doing things (“automating existing community practices”).

  3. Tell and teach people about Binder and all the things around it. If more people know about the project, then more people use it. This is a good thing because it is a useful tool that lets people do things that used to be hard or impossible before. Binder is a good solution to some problems, and a lot of people who have these problems don’t know Binder exists. This needs fixing :slight_smile: Furthermore, good explanations and education about a tool like Binder and the ideas/concepts involved are hard work. A side effect of more users is more ideas about what people actually need, and more people with the skills and time to contribute back to the project.

I could go on for ages but I will stop to see what your reaction is and which avenue we should focus on more. And if none of these sound exciting or like a good fit, there are fourth, fifth, and sixth avenues :smiley:


Great. I’d like to get the MA Open Cloud to join the federation. Where should I point people to describe what would be involved? This thread offers some context, but a more explicit how-to with tiers would be very helpful:


The (too?) short answer is: the process is still very informal. A good first step is opening an issue on https://github.com/jupyterhub/team-compass; this will get more of the team involved. Most of the discussions about current federation members happened in issues there, so it is easy to link to them, and a lot of this is based on trust between all the people involved. There is also a monthly team meeting open to everyone, which would be a good place to talk about this: https://github.com/jupyterhub/team-compass/issues/297 (unfortunately the June meeting is at an Asia-friendly/early-Europe time, which might make it hard to attend from an Americas timezone).

There are currently four members of the federation. For each of them, how they ended up joining was a bit different and personal. Every time we do this we try to make the process a bit more formal and repeatable.

Most members of the federation (GKE, OVH, Turing) provide us with a bare Kubernetes cluster on which we deploy a BinderHub from https://github.com/jupyterhub/mybinder.org-deploy/ (via Travis). This helps keep the clusters in sync: shared config, etc. We can kubectl ... on all these clusters to take care of admin things. On GKE and Turing we have full control; for OVH the OVH team takes care of provisioning nodes and such.
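To make the “kubectl on all these clusters” part concrete, here is a minimal sketch of what day-to-day admin access looks like, assuming a kubeconfig with one context per federation cluster. The context and namespace names below are made up for illustration; the real ones live in the mybinder.org-deploy configuration.

```bash
# List the clusters (contexts) available in the local kubeconfig.
kubectl config get-contexts

# Switch to one federation member's cluster (context name is hypothetical).
kubectl config use-context ovh-binder

# Inspect the BinderHub/JupyterHub pods (namespace name is hypothetical).
kubectl get pods --namespace binder-prod

# Check node health on the donated cluster.
kubectl get nodes
```

Because every member exposes a standard Kubernetes API, the same deployment and admin tooling works regardless of who donated the compute.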

The GESIS cluster is managed independently via https://github.com/gesiscss/orc. @arnim and @bitnik work for the GESIS institute, which sponsors this bare-metal cluster. They take care of everything, including keeping configs and versions in sync (much of it automated), running the hardware, the network, etc.

Either approach works for us; each has its pros and cons.

Some related links:
