JupyterHub custom kernels get removed after container restart

Hi team,

I have configured JupyterHub and notebooks in containers. When a user logs into JupyterHub, a new container spins up for that particular user and they can work in it. Users create new custom kernels for their work and add dependencies to them, but when the JupyterHub server stops or restarts, all the new custom kernels are removed and the user has to set up the whole kernel again. Is there any way we can save those kernels somewhere so they won't be affected when a restart happens?
Thanks in advance,
Sanchit Aggarwal

There are several ways, depending on how you're running your containers. For example, DockerSpawner has an option to stop but not delete containers, so changes to the filesystem will be saved. Other container spawners allow you to mount a volume for persistent data. If that volume is mounted as the user's home directory, then as long as all changes are kept inside the home directory they should be available when the container is rebuilt.
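A minimal sketch of both options in `jupyterhub_config.py`, assuming DockerSpawner is installed; the host path `/srv/jupyterhub/users` is a placeholder, not something from this thread:

```python
# jupyterhub_config.py -- sketch, assuming DockerSpawner is in use
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'

# Option 1: keep stopped containers instead of deleting them, so
# filesystem changes survive a stop/start of the user's server.
c.DockerSpawner.remove = False

# Option 2 (can be combined with option 1): mount persistent storage
# as the user's home directory. {username} is expanded per user, so
# each user gets their own host directory.
c.DockerSpawner.volumes = {
    '/srv/jupyterhub/users/{username}': '/home/jovyan',
}
```

With the volume mounted, anything written under `/home/jovyan` outlives the container; anything written elsewhere in the container filesystem does not.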

Hi manics,
Thank you for your response.
What is happening is that whenever a new user logs into JupyterHub, a new container spins up for that user. I am new to JupyterHub. Could you please share some documentation on how to set up DockerSpawner or mount a volume?

These are the docs for DockerSpawner:
https://jupyterhub-dockerspawner.readthedocs.io/en/latest/

I have tried `c.DockerSpawner.remove = False`, but it is not working.
I am using EFS for mounting and mounted it at the user's home directory (/home/jovyan), but that is also not working. Mount configuration: `c.DockerSpawner.volumes = {'/mnt': {'bind': '/home/jovyan', 'mode': 'rw'}}`
The new environments and files created in the user's home directory are not available when the notebook server restarts.
Is there another way to do this, or did I configure something wrong?
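One thing worth checking is where the kernels are actually being written: a kernelspec only survives if it lands inside the mounted path, e.g. installed with `python -m ipykernel install --user` so it goes under `/home/jovyan/.local/share/jupyter/kernels`, and any conda/virtualenv environments must also live under `/home/jovyan`. Another option from the DockerSpawner docs is a per-user named Docker volume instead of a host bind mount; a sketch:

```python
# jupyterhub_config.py -- sketch: per-user named Docker volume.
# Docker creates the volume on first spawn, and it persists even if
# the container itself is removed.
c.DockerSpawner.volumes = {
    'jupyterhub-user-{username}': '/home/jovyan',
}
```

This avoids giving every user the same host directory (`/mnt` in your configuration), which would otherwise be shared by all users.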

I have also tried `c.DockerSpawner.remove = False` but am not getting what I want.