Hello, how can I publish a port (same as `docker run --publish 9876:9876`) using DockerSpawner from within `jupyterhub_config.py`? Is it possible or not?
An attempt with `extra_container_spec` gives `unexpected keyword argument 'publish'`.
How can I find out (and possibly change) which keywords are allowed for `extra_container_spec`?
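For reference, DockerSpawner does not take a `publish` keyword directly; port publishing goes through the docker-py create/host-config dictionaries via the `extra_create_kwargs` and `extra_host_config` traits. A minimal sketch (the port number 9876 is just the example from above):

```python
# jupyterhub_config.py
# `c` is the config object JupyterHub provides when loading this file.

c.DockerSpawner.extra_create_kwargs = {
    # expose the container port (docker-py's `ports` argument)
    "ports": [9876],
}
c.DockerSpawner.extra_host_config = {
    # map host port 9876 -> container port 9876
    # (docker-py's `port_bindings` host-config entry)
    "port_bindings": {9876: 9876},
}
```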
Can I ask what the goal is of publishing additional ports for user containers? Is it to make some service available to a non-jupyterhub service running elsewhere on the machine or network, but not in docker? Even published ports won’t be accessible via JupyterHub, so usually when folks ask for something like this, the answer is actually something like jupyter-server-proxy.
Yes, the main objective is to allow connections from a remote host to a service running inside the JupyterLab container.
More precisely, I am trying to run Dask with a dask-worker running in a batch system (like SLURM, by using SLURMCluster) and the dask-scheduler running inside the JupyterLab notebook (inside a container).
Could jupyter-server-proxy solve such a requirement?
Another piece of information: the same host will run several JupyterLab instances (for several users), and the ports to publish will be defined at "spawn time".
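If the port has to be chosen per user at spawn time, one option is `Spawner.pre_spawn_hook`, which runs before each container is created. A hedged sketch; the base port and the per-user port scheme here are hypothetical, not anything DockerSpawner provides:

```python
# jupyterhub_config.py
import zlib

BASE_PORT = 49000  # hypothetical base of a port range reserved for users


def pre_spawn_hook(spawner):
    # Derive a stable per-user host port from the username.
    # (zlib.crc32 is deterministic across processes, unlike hash() on str;
    # collisions are possible with many users -- this is only a sketch.)
    port = BASE_PORT + zlib.crc32(spawner.user.name.encode()) % 1000
    spawner.extra_create_kwargs = {"ports": [9876]}
    spawner.extra_host_config = {"port_bindings": {9876: port}}


c.Spawner.pre_spawn_hook = pre_spawn_hook
```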
That actually would be a case for publishing ports. I think you might want to modify the notebook container. You can try KubeSpawner.extra_container_config which modifies the notebook container where things run. Otherwise, you might need to use KubeSpawner.modify_pod_hook which can be a callable to modify the pod object before it’s created.
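A rough sketch of the `modify_pod_hook` approach mentioned above, assuming the official `kubernetes` Python client is available and that the notebook container is the first container in the pod (the port number is again just an example):

```python
# jupyterhub_config.py
from kubernetes.client.models import V1ContainerPort


def modify_pod_hook(spawner, pod):
    # Add an extra containerPort to the notebook container
    # before the pod object is submitted to the API server.
    container = pod.spec.containers[0]
    container.ports = (container.ports or []) + [
        V1ContainerPort(container_port=9876)
    ]
    return pod


c.KubeSpawner.modify_pod_hook = modify_pod_hook
```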
Thank you for the tips
I installed jupyter-server-proxy and, as I understand it, if I run a service inside the notebook in the container, e.g. on port 49999, then it will be accessible via `my_notebook_url/proxy/49999` in the web browser, right?
But how can the service be reached from an external client (a dask-worker running on a SLURM worker node)? That seems impossible to me. And what about authentication?
I'm afraid I am on the wrong track.
Another piece of information, which implies two further questions.
Using jupyter-server-proxy, with an HTTP server running inside a notebook (inside a container), I can reach this HTTP server via `curl -H "Authorization: token ${JUPYTERHUB_API_TOKEN}" https://x.y.z/user/mylogon/proxy/49999`.
So my previous question about authentication seems to be solved…
Could someone confirm that the `JUPYTERHUB_API_TOKEN` environment variable is different for each user?
Back to my initial concern, which is Dask: how can I turn this HTTP possibility into the TCP connectivity required for communication between dask-scheduler <-> dask-worker?
Maybe step back a bit: how would you do this on a shared system without JupyterHub? You could open a port, but anyone could connect to it, so how would you set up authentication?