Passing access_token to pod with custom Authenticator class

I am migrating a JupyterHub project to k8s and am having difficulty passing the access_token into an environment variable in the pod the way I did before. I am successfully authenticating via OAuth to my backend; I just need to get the access_token into the pod.

hub:
  image:
    name: hub_image
    tag: <tag>

  config:
    GenericOAuthenticator:
      client_id: <client_id>
      client_secret: <client_secret>
      oauth_callback_url: http://.../hub/oauth_callback
      authorize_url: http://.../public/api/v1/oauth/authorize/
      token_url: http://.../public/api/v1/oauth/token/
      userdata_url: http://.../public/api/v1/oauth/userinfo/
      scope:
        - openid
        - read
        - write
      username_key: username
      auto_login: true
      enable_auth_state: true
    JupyterHub:
      authenticator_class: generic-oauth
      admin_access: true

  extraConfig:
    00-pass-auth-token: |
      from oauthenticator.generic import GenericOAuthenticator

      class MyCustomAuthenticator(GenericOAuthenticator):
          async def pre_spawn_start(self, user, spawner):
              auth_state = await user.get_auth_state()
              if auth_state and "access_token" in auth_state:
                  c.KubeSpawner.environment.update(
                      {
                          "AUTH_TOKEN": auth_state["access_token"]
                      }
                  )

In my previous JupyterHub project I was able to set c.JupyterHub.authenticator_class = MyCustomAuthenticator in jupyterhub_config.py. Do I need to set the authenticator_class to point to MyCustomAuthenticator somehow here too?

Thanks

I guess you should have something like this (and you shouldn't forget to add .py to 00-pass-auth-token, I think):

extraConfig:
  00-pass-auth-token.py: |
    from oauthenticator.generic import GenericOAuthenticator

    class MyCustomAuthenticator(GenericOAuthenticator):
        async def pre_spawn_start(self, user, spawner):
            auth_state = await user.get_auth_state()
            if auth_state and "access_token" in auth_state:
                c.KubeSpawner.environment.update(
                    {
                        "AUTH_TOKEN": auth_state["access_token"]
                    }
                )

    c.JupyterHub.authenticator_class = MyCustomAuthenticator
    (...)

Tried it. The code block is running, but my pre_spawn_start() is still not being executed.

edit: Actually, my pre_spawn_start executed exactly once and the token showed up, but it has not run again on any of my other manual tests.

edit2: Okay, I seem to have narrowed it down a bit. If I make any change to the code block (even just adding a comment) and run a helm upgrade, the token doesn't show up the first time I log in after the upgrade. However, if I delete the pod and log in again, the token does show up on every login until I run another helm upgrade (whether I delete the pod again or not). I am testing this in a minikube environment and am relatively new to k8s and helm, so I don't know enough about what's going on under the hood to guess why this is happening yet.

You might add an else branch with some debug logging, since there could be state you don’t expect, e.g. auth state might be empty or disabled.
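
For example, a minimal sketch of that logging branch (the message wording is just an illustration; self.log is the standard logger available on JupyterHub authenticators):

    if auth_state and "access_token" in auth_state:
        ...
    else:
        # auth_state can be None, e.g. if auth state is disabled
        # or could not be decrypted; surface that in the hub logs
        self.log.warning("No usable auth_state for %s: %r", user.name, auth_state)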

Instead of modifying c.KubeSpawner.environment, try modifying spawner.environment (see Authenticators — JupyterHub 2.3.1 documentation). The config object c is only read when a Spawner is constructed, so values added to it from pre_spawn_start don't reach the pod that is currently spawning, and they leak into the global config seen by later spawns, which would explain the intermittent behavior you're seeing.
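
Putting both suggestions together, a sketch of how the extraConfig block might look (AUTH_TOKEN and the 00-pass-auth-token key are carried over from above; spawner.environment is the per-user mechanism from the linked docs):

extraConfig:
  00-pass-auth-token: |
    from oauthenticator.generic import GenericOAuthenticator

    class MyCustomAuthenticator(GenericOAuthenticator):
        async def pre_spawn_start(self, user, spawner):
            auth_state = await user.get_auth_state()
            if auth_state and "access_token" in auth_state:
                # spawner.environment only affects the pod being spawned
                # for this user, unlike the global c.KubeSpawner config
                spawner.environment["AUTH_TOKEN"] = auth_state["access_token"]
            else:
                self.log.warning("No usable auth_state for %s", user.name)

    c.JupyterHub.authenticator_class = MyCustomAuthenticator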