Jupyter Notebook Sockets

I wanted to share some discovery work I did this weekend on the --sock option. If you start from this pull request: "Add UNIX socket support to notebook server" by kwlzn (jupyter/notebook #4835), it might not work for you out of the box. The reason (for me) was that I also needed the -T flag to disable pty allocation (without it, ssh was aborting for me). With that change, this worked (running a notebook from a login node):

# Running on login node
$ ssh -NT -L 8888:/tmp/test.sock user@server
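
For completeness, the other end looks something like the sketch below: the server-side command that makes the notebook listen on the socket in the first place, per the --sock option from the linked PR. The socket path here is just an example.

```shell
# On the server: listen on a UNIX socket instead of a TCP port,
# so nothing needs to be exposed on a network port.
$ jupyter notebook --sock /tmp/test.sock --no-browser
```

After the ssh forward above is up, http://localhost:8888 on your laptop reaches that socket.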

But then the tricky part was getting this to work on an isolated cluster behind a login node (again, we need sockets because no ports are exposed on the job or login nodes). Ultimately I came up with:

$ ssh -NT -L 8899:/home/dinosaur/login-node.sock user@server ssh isolated-node -NT -L /home/dinosaur/login-node.sock:/home/dinosaur/jupyter.sock

This forwards the job-node socket to the login node, and then forwards that new socket to my host. Note that my tool running these commands needed to clean up the .sock files between runs, and (given a Singularity container) you need to bind the Jupyter home to jovyan's home, along with setting --home and the Python .local folder (ping me if you want more details).
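
The cleanup step between runs can be sketched roughly like this; `clean_sock` and the paths are illustrative names, not the actual tool:

```shell
# Remove stale socket files left by a previous run so a fresh
# server (or ssh forward) can bind the path again.
clean_sock() {
    # -S: path exists and is a socket (don't delete ordinary files)
    [ -S "$1" ] && rm -f "$1"
    return 0
}

clean_sock /tmp/test.sock
clean_sock "$HOME/jupyter.sock"
```

The `-S` test matters: if the path exists but is a regular file, something else is going on and you probably don't want to silently delete it.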

If anyone needs to debug sockets, what I found useful was running nc -U /path/to/test.sock to test whether I had permission (as a given user, from a given location) to connect. If you want more verbose detail, see the thread and the post linked from it here:
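
If your system's netcat lacks -U, here is a small sketch of the same check using python3 instead; `probe_sock` and the path are made up for illustration:

```shell
# Same question `nc -U` answers: can this user connect() to the socket?
probe_sock() {
    python3 - "$1" <<'EOF'
import socket, sys
s = socket.socket(socket.AF_UNIX)
try:
    s.connect(sys.argv[1])
    print("connect ok")
except OSError as e:
    print(f"connect failed: {e}")
EOF
}
```

A "connect failed: Permission denied" here usually points at the socket file's mode or the permissions on a parent directory.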


Does treating the login nodes as a proxy/jump host let you skip the intermediate login-node.sock?


Oh indeed, that might! I'll give this a test in a coming weekend. Thank you @manics!

One quick question! Given that we cannot know the name of the job node until the job lands on it, how could we pursue this approach when there could be thousands of potential worker nodes (that we don't want to enumerate in a config)? Is this maybe intended for proxy setups with reliable (single) hostnames? :thinking:

I don’t think you can. If your HPC cluster is highly restricted, then you’ll have to wait for the scheduler to allocate a node to you, and then set up your tunnel separately.

Ideally you’d install JupyterHub (or an alternative), and use it to spin up Jupyter notebook/lab servers, with jupyterhub/batchspawner (a custom spawner for JupyterHub) to start servers on batch-scheduled systems.

The proxy jump solution totally worked! For others that are interested, I did:

$ ssh -J user@server <machine> -NT -L <port>:/home/user/path/to/worker-node.sock

Basically, have the main login node “jump” to the machine, and provide the command to forward the worker-node socket directly to the port. This is great — well, this is better: it doesn’t require that second mapping of the socket onto the login node. Thank you @manics!
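
If you do this often, the jump can also live in ssh config instead of the command line. A sketch of a roughly equivalent entry (hostnames, port, and socket path are placeholders, assuming an OpenSSH version new enough to forward to UNIX socket paths, 6.7+):

```
Host worker
    HostName <machine>
    ProxyJump user@server
    # local TCP port -> UNIX socket path on the worker node
    LocalForward 8888 /home/user/path/to/worker-node.sock
```

Then `ssh -NT worker` sets up the same tunnel.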