Binder failed to launch: User already has a server running



I have authentication running on my BinderHub, and my question is what to do about the following message. It seems to occur when a user launches Binder instances in quick succession, but I feel the user should be able to leave the Binder homepage open after they’ve authenticated and relaunch a Binder, or launch a different Binder repo, whenever they choose. At the moment, they have to wait for the culler to kill their user pod before they can open another Binder instance, or stop their server from the Control Panel in the Notebook environment (depending on the setting of cull.users; I’ve tried both True and False). Any advice?

    Found built image, launching...
    Launching server...
    Launch attempt 1 failed, retrying...
    Launch attempt 2 failed, retrying...
    Launch attempt 3 failed, retrying...
    User <username> already has a running server.

Config below:

    config:
      BinderHub:
        use_registry: true
        image_prefix: <image-prefix>-
        hub_url: http://<jupyter-ip>
        auth_enabled: true

    jupyterhub:
      cull:
        users: false
        every: 660
        timeout: 600
        maxAge: 21600
      hub:
        services:
          binder:
            oauth_redirect_uri: "http://<binder-ip>/oauth_callback"
            oauth_client_id: "binder-oauth-client-test"
        extraConfig:
          hub_extra: |
            c.JupyterHub.redirect_to_server = False
          binder: |
            from kubespawner import KubeSpawner

            class BinderSpawner(KubeSpawner):
                def start(self):
                    if 'image' in self.user_options:
                        # binder service sets the image spec via user options
                        self.image = self.user_options['image']
                    return super().start()

            c.JupyterHub.spawner_class = BinderSpawner
      singleuser:
        cmd: jupyterhub-singleuser
      auth:
        type: github
        github:
          clientId: "<redacted>"
          clientSecret: "<redacted>"
          callbackUrl: "http://<jupyter-ip>/hub/oauth_callback"

I’ve never run a server with auth, so wild speculation ahead: I think what you diagnosed is exactly right, and the code probably doesn’t contain any “shut down and then launch a new repo” logic. I think it would make sense to add that.

Or, investigate “named servers” in combination with auth: use the name of the repo as the name for the server and let people start lots of servers in parallel. That would be similar to how an anonymous BinderHub works at the moment.
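I haven’t tried this, so very much a sketch: JupyterHub would need c.JupyterHub.allow_named_servers = True, after which the Hub REST API can start one server per name. The hub URL, token, username, and repo-slug naming below are all assumptions:

    import requests

    # assumptions: hub URL and an API token; c.JupyterHub.allow_named_servers
    # must be set to True in the hub config for this to work
    api_url = "http://<jupyter-ip>/hub/api"
    headers = {"Authorization": "token <api-token>"}

    # one named server per repo lets several run in parallel
    server_name = "binder-example-requirements"  # hypothetical repo slug
    r = requests.post(
        f"{api_url}/users/<username>/servers/{server_name}",
        headers=headers,
    )
    r.raise_for_status()  # 201 when started, 202 while still spawning

Each repo would then get its own pod, so launches in quick succession wouldn’t collide on the single default server.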


This is how I ended up with cull.maxAge set so low: I regularly launch the same repo while changing config.yaml to check everything still works. But sometimes the Hub seems to get confused if a pod with the same repo URL is still running, so I want to clear out those pods as quickly as possible while I’m testing. And kubectl delete pod feels a bit renegade.

“Named servers” definitely sounds like the route I want to take for the time being, as there’s no persistent volume claim associated with the user (yet). Auth is only there to determine who has access to the Binder launch page right now. (This may, of course, all change!)


Delete first, ask questions later. Not sure that’s the official guidance for ops, but it’s close :wink:


Easier to ask forgiveness than permission!

Are there any docs on named servers, or shall I dive into the source code?


I’ve recategorised this topic as JupyterHub, as I probably need to track down where JH names user pods in the Helm chart.


@sgibson91 not sure if you’ve found a solution to this, but there is a pod_name_template configuration variable you can set for KubeSpawner; see the KubeSpawner documentation and the sketch below.
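Untested on my side, but something along these lines wherever your KubeSpawner config lives; {username} and {servername} are fields KubeSpawner expands itself:

    # assumption: this sits in a hub extraConfig block (or wherever your
    # KubeSpawner settings live); KubeSpawner fills in the template fields
    c.KubeSpawner.pod_name_template = 'jupyter-{username}-{servername}'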

Also, note that if you use kubectl delete you still have to either wait for JH (and the proxy) to clean up their internal state (I think by default this takes 5 minutes but I could be wrong) or restart JH.


Thank you @rokroskar, I will look into this!

Yes, I’ve now made myself admin on the JupyterHub so I can stop servers there. It’s still very manual, but it seems to trigger the JH and proxy to update their state more consistently.
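I suppose I could script the manual stopping through the Hub REST API instead; an untested sketch, with the hub URL and admin token as placeholders:

    import requests

    # placeholders: hub URL and a token with admin rights
    api_url = "http://<jupyter-ip>/hub/api"
    headers = {"Authorization": "token <admin-api-token>"}

    # stop <username>'s default server; unlike `kubectl delete pod`, the
    # Hub and proxy update their own state as part of handling this request
    r = requests.delete(f"{api_url}/users/<username>/server", headers=headers)
    r.raise_for_status()  # 204 once stopped, 202 if shutdown is still pending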


Hi @rokroskar, I’m just getting back to this as I got distracted squishing other bugs. Could you provide some advice on how to begin implementing this, please? Under which key in config.yaml do I set pod_name_template, and how do I get the repo/image name in there instead of the user name? My current (untested) guess is sketched below. Many thanks!
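To be clear, this is only my guess and I haven’t verified it: since the existing BinderSpawner already lives in a hub extraConfig block, maybe the template can be set in that same Python snippet, with {servername} (given named servers) standing in for the repo:

    from kubespawner import KubeSpawner

    class BinderSpawner(KubeSpawner):
        def start(self):
            if 'image' in self.user_options:
                # binder service sets the image spec via user options
                self.image = self.user_options['image']
            return super().start()

    c.JupyterHub.spawner_class = BinderSpawner

    # guess: a class-level template; if named servers are enabled and the
    # server is named after the repo, {servername} carries the repo name
    # into the pod name instead of relying on {username} alone
    c.KubeSpawner.pod_name_template = 'jupyter-{username}-{servername}'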