Canonical way of confidentially storing user credentials for databases in JupyterHub

I am deploying JupyterHub for multiple users at my firm; each user has separate database credentials. I want to use these credentials in the spawned JupyterLab servers. What is the canonical way of doing this? I don’t want to store plaintext credentials on disk, and user environment variables are not passed through to Jupyter notebooks reliably (see my other question), so that’s out of the question too.

So can I “tap into” the PAMAuthenticator, for example, to get a hashed Linux password in order to use that as an encryption key? How do I go about authenticating each user to outside services without storing plaintext credentials in their home directories?

The outside service, in my case fwiw, is InfluxDB. Ideally I want a user’s credentials “unlocked” when they’re PAM-authenticated by the hub.

It depends largely on the spawner you use. For a better understanding, please share your configuration (preferably a minimal example).

The spawner usually allows you to execute some code before the user’s notebook UI is spawned; you could use that hook. I believe a more canonical approach would be to use something like HashiCorp Vault, so that only a token, rather than a password, is stored on the system.
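As a minimal sketch of that hook: `c.Spawner.pre_spawn_hook` runs in the hub process just before each single-user server starts. Here `fetch_vault_token()` is a hypothetical helper (not part of JupyterHub) that you would implement against your own Vault deployment:

```python
# jupyterhub_config.py -- a sketch, not a drop-in config.

def fetch_vault_token(username):
    """Hypothetical helper: exchange the hub's identity for a short-lived
    Vault token scoped to this user, e.g. via the hvac client library."""
    raise NotImplementedError("replace with your Vault integration")

def pre_spawn_hook(spawner):
    # Called in the hub process right before the user's server is spawned.
    token = fetch_vault_token(spawner.user.name)
    # Expose only the short-lived token (never a password) to the
    # single-user server's environment.
    spawner.environment["VAULT_TOKEN"] = token

c.Spawner.pre_spawn_hook = pre_spawn_hook
```

The notebook process then reads `VAULT_TOKEN` and asks Vault for the actual database credentials on demand.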

I am using a completely standard installation, so whatever the default spawner is for multiple users of a Linux system (with the default PAMAuthenticator) is what I am using.

You can use auth_state to store objects that can be passed to the spawner.
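For illustration, a hedged sketch of the spawner side: `c.Spawner.auth_state_hook` receives the auth_state dict captured at login. The `"influx_token"` key is hypothetical; it is whatever your authenticator chooses to store. Note that auth_state must be explicitly enabled and is encrypted at rest, which requires `JUPYTERHUB_CRYPT_KEY` to be set in the hub’s environment:

```python
# jupyterhub_config.py -- sketch of consuming auth_state in the spawner.

c.Authenticator.enable_auth_state = True

def auth_state_hook(spawner, auth_state):
    """Copy secrets captured at login time into the single-user environment."""
    if not auth_state:
        return  # auth_state disabled, expired, or not set by the authenticator
    # "influx_token" is a hypothetical key your authenticator would store.
    spawner.environment["INFLUX_TOKEN"] = auth_state.get("influx_token", "")

c.Spawner.auth_state_hook = auth_state_hook
```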

Do I have to write a custom authenticator, though? Or can this be used in conjunction with the default PAMAuthenticator? I.e., can I inherit from PAMAuthenticator and only override the .authenticate() method? Or will I have to write all the authentication plumbing from scratch to achieve this?

You can inherit from an existing authenticator and override just the methods you need.
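For example, a minimal sketch (under the assumption that auth_state is enabled and `JUPYTERHUB_CRYPT_KEY` is set) that subclasses PAMAuthenticator, lets PAM do the real authentication, and stashes the password in encrypted auth_state rather than on disk:

```python
from jupyterhub.auth import PAMAuthenticator

class CredentialCapturingAuthenticator(PAMAuthenticator):
    """Sketch: reuse PAM authentication, but capture the password at login
    time into auth_state so a spawner hook can use it later."""

    async def authenticate(self, handler, data):
        # Delegate the actual PAM check to the parent class.
        result = await super().authenticate(handler, data)
        if result is None:
            return None  # PAM rejected the login
        # Depending on the JupyterHub version, the parent may return a
        # username string or a dict with a "name" key; handle both.
        username = result["name"] if isinstance(result, dict) else result
        return {
            "name": username,
            # Encrypted at rest by the hub; never written to the user's
            # home directory. A hook can later trade it for e.g. an
            # InfluxDB token.
            "auth_state": {"password": data["password"]},
        }
```

You would then point the hub at it with `c.JupyterHub.authenticator_class = CredentialCapturingAuthenticator` in `jupyterhub_config.py`.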