I’ve been using and enjoying Jupyter notebooks for several years, but only in single-user environments.
Now I have a use case where I need JupyterHub to spawn per-user instances.
Following your guides I managed to use Keycloak as the authentication provider for login, which is quite nice.
What I don’t understand is the following, shown in the graphic:
My business application and JupyterHub share the same authentication provider (Keycloak), so once I’ve logged in to one of them, I can switch directly to the other without being forced to log in again.
I’ve seen the documentation on using
pre_spawn_start to forward a bearer token to the single-user server:
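For reference, this is roughly what I have in mind, as a minimal sketch based on JupyterHub’s `Authenticator.pre_spawn_start` hook and OAuthenticator’s auth state (the environment variable names are my own choice, and `enable_auth_state` is assumed to be on):

```python
# jupyterhub_config.py — sketch: pass the Keycloak tokens from auth_state
# into the single-user server's environment at spawn time.
from oauthenticator.generic import GenericOAuthenticator

class KeycloakAuthenticator(GenericOAuthenticator):
    async def pre_spawn_start(self, user, spawner):
        auth_state = await user.get_auth_state()
        if not auth_state:
            return  # auth_state is disabled or has expired
        # Hypothetical variable names; read them from notebook code later.
        spawner.environment["ACCESS_TOKEN"] = auth_state["access_token"]
        spawner.environment["REFRESH_TOKEN"] = auth_state.get("refresh_token", "")

c.JupyterHub.authenticator_class = KeycloakAuthenticator
c.Authenticator.enable_auth_state = True
```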
That works nicely to forward the information initially, but once the token (and refresh token) expires there is no direct way to refresh it, because my scripts are just loading data and performing calculations without interacting with my application backend or Keycloak directly.
The OpenID Connect / OAuth2 client libraries I’ve found all require a client_secret or an interactive web login to authenticate; nothing I would want to do from within my Python scripts.
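For what it’s worth, the raw OIDC refresh grant itself only needs the client_id and the refresh token when the Keycloak client is configured as a public client (no secret). A self-contained sketch of what I could call from a notebook, with the URL and client_id as placeholders:

```python
# Sketch: exchange a refresh token for a new access token via Keycloak's
# standard OIDC token endpoint (refresh_token grant, public client).
import json
import urllib.parse
import urllib.request

# Placeholders — not my real realm or client.
TOKEN_URL = "https://keycloak.example.com/realms/myrealm/protocol/openid-connect/token"
CLIENT_ID = "notebook-client"

def build_refresh_payload(refresh_token, client_id=CLIENT_ID):
    """Form-encoded body for the OIDC refresh_token grant (no client_secret)."""
    return urllib.parse.urlencode({
        "grant_type": "refresh_token",
        "client_id": client_id,
        "refresh_token": refresh_token,
    })

def refresh_access_token(refresh_token):
    """POST the refresh grant and return the new access token."""
    req = urllib.request.Request(
        TOKEN_URL,
        data=build_refresh_payload(refresh_token).encode(),
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```

This still requires the notebook to hold a valid refresh token, though, which is exactly the part that expires in my scenario.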
What I’m looking for is a way to make JupyterHub (or its public proxy/API) refresh my auth token from Keycloak as soon as it expires, so that the Python scripts in my single-user notebook server can fetch a valid token from the API to communicate with the application backend.
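Perhaps something along the lines of JupyterHub’s `Authenticator.refresh_user` hook together with `auth_refresh_age` could do this on the hub side? A sketch of what I imagine, relying on GenericOAuthenticator’s `token_url` and `client_id` traits; I haven’t verified this end to end:

```python
# jupyterhub_config.py — sketch: let the hub refresh the Keycloak tokens
# periodically and keep the updated tokens in auth_state.
import json
import urllib.parse

from oauthenticator.generic import GenericOAuthenticator
from tornado.httpclient import AsyncHTTPClient

class RefreshingKeycloakAuthenticator(GenericOAuthenticator):
    async def refresh_user(self, user, handler=None):
        auth_state = await user.get_auth_state()
        if not auth_state or "refresh_token" not in auth_state:
            return True  # nothing to refresh; keep the user as-is
        body = urllib.parse.urlencode({
            "grant_type": "refresh_token",
            "client_id": self.client_id,
            "refresh_token": auth_state["refresh_token"],
        })
        resp = await AsyncHTTPClient().fetch(
            self.token_url, method="POST", body=body,
            headers={"Content-Type": "application/x-www-form-urlencoded"},
        )
        tokens = json.loads(resp.body)
        auth_state["access_token"] = tokens["access_token"]
        auth_state["refresh_token"] = tokens.get(
            "refresh_token", auth_state["refresh_token"])
        return {"auth_state": auth_state}

c.JupyterHub.authenticator_class = RefreshingKeycloakAuthenticator
c.Authenticator.enable_auth_state = True
c.Authenticator.auth_refresh_age = 300  # re-validate tokens every 5 minutes
```

Even then, the notebook scripts would still need some endpoint to fetch the current token from, which is the piece I don’t see.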
What I’m trying to avoid is forcing my users to re-authenticate against Keycloak before talking to the application backend while they are still actively using JupyterHub (which itself depends on Keycloak auth) but are not directly communicating with the backend that would otherwise refresh the token for them.
Thanks in advance for any hint,