Hello all! We currently run JupyterLab as a headless server with some base kernels, but we allow installing various other kernels via conda environments. We'd like the ability to also reference remote kernels in addition to local kernels.
An example use case: I have a notebook running on my JL server, I've installed several other kernels, and I now need the horsepower of a GPU kernel that I'd like to connect to on demand. Ideally, this GPU kernel would serve multiple notebooks via the same gateway URL connection so that its resources are used efficiently.
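For context, the local kernels are just the kernelspecs our server discovers on disk (registered from the conda envs with `ipykernel install`). A quick way to see what JupyterLab would offer locally is something like the following sketch, assuming `jupyter_client` is installed:

```python
# List the kernelspecs the local Jupyter server would discover,
# e.g. the ones registered from conda environments via
# `python -m ipykernel install --user --name <env-name>`.
from jupyter_client.kernelspec import KernelSpecManager

ksm = KernelSpecManager()
for name, spec in ksm.get_all_specs().items():
    print(f"{name}: {spec['spec']['display_name']} -> {spec['resource_dir']}")
```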
From looking at Enterprise Gateway, it seems that if I pass a gateway URL to my JL server on startup, I will only be able to talk to kernels on the remote machine and none of the locally installed ones. Is that a correct assumption?
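To be concrete, this is roughly what I mean by passing a gateway URL on start, if I understand the docs correctly; the host name is just a placeholder, and the same thing can apparently be done with the `--gateway-url` command-line flag:

```python
# jupyter_server_config.py -- placeholder host; point this at the
# Enterprise Gateway (or Kernel Gateway) instance.
c.GatewayClient.url = "http://my-gateway-host:8888"

# Optional: auth token for the gateway, if it requires one.
# c.GatewayClient.auth_token = "<token>"
```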
Is there a way to achieve the behavior I’m looking for with the capabilities currently available? I’ve also seen the more recently released Gateway/Kernel Provisioners.
I've also seen a couple of other questions posted that seem to ask similar things, but no one has replied:
Yes, that's correct, at least in my experience. Once you configure JupyterLab (I've specifically used Lab 3.x and Jupyter Server 1.x) with --gateway-url, all kernel requests and interactions go through the gateway server. I believe provisioners are attempting to solve the local + remote kernels use case, but I haven't tried them myself. Outside of that, you could do something like Google does and multiplex requests between the local Jupyter Server and the remote Enterprise Gateway (ref).
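Here's a very rough illustration of that multiplexing idea (not Google's actual implementation): a toy dispatcher that starts a kernel against either the local Jupyter Server or the remote Enterprise Gateway REST API, based on the kernel name. Both expose the same `/api/kernels` endpoint, so the only real decision is which base URL to hit. The hosts, tokens, and the "remote-" naming convention below are made-up placeholders.

```python
"""Toy sketch of multiplexing kernel starts between a local Jupyter Server
and a remote Enterprise Gateway. Hosts, tokens, and the "remote-" prefix
are placeholders, not a real API of any of the projects mentioned above."""
import requests

LOCAL_SERVER = {"url": "http://localhost:8888", "token": "<local-token>"}
REMOTE_GATEWAY = {"url": "http://gpu-gateway:8888", "token": "<gateway-token>"}


def start_kernel(kernel_name: str) -> dict:
    # Hypothetical routing rule: kernelspecs prefixed with "remote-" live
    # on the gateway, everything else starts on the local server.
    target = REMOTE_GATEWAY if kernel_name.startswith("remote-") else LOCAL_SERVER
    resp = requests.post(
        f"{target['url']}/api/kernels",
        json={"name": kernel_name.removeprefix("remote-")},
        headers={"Authorization": f"token {target['token']}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # kernel model: {"id": ..., "name": ..., ...}


if __name__ == "__main__":
    print(start_kernel("python3"))            # starts on the local server
    print(start_kernel("remote-gpu-kernel"))  # starts on the remote gateway
```

The hard part in practice is wiring something like this behind the kernels/kernelspecs handlers that the frontend actually talks to; as far as I know there isn't an off-the-shelf hook for that today.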
If you do implement either approach I’d be very curious to hear how it goes!
Hi, at Datalayer we have developed a solution to that exact problem, allowing you to use local and remote kernels at the same time. You can even mix local and remote kernels per cell within a single notebook (1).
You can read more context at (2). The documentation is at (3), and you can register on the waiting list (4) to get free credits for remote CPU and GPU kernels. We are launching in Private Beta tomorrow!
If you are looking for a pure open-source solution, you may be interested in jupyter-react. We have just merged a PR that lets you mutate a notebook from static, to pyodide (browser kernels), to a classic Jupyter kernel, to a more powerful Datalayer kernel (5). You can build your own web app or JupyterLab extensions with the jupyter-react open-source components.
PS: The docs for deploying your own remote kernel on Kubernetes are at https://datalayer.tech - you can think of it as a service that maintains Jupyter kernels (CPU/GPU) that can be consumed from JupyterLab, the CLI, or VS Code.
Hey @echarles, would you guys mind sharing a bit about how you are achieving both local and remote connectivity? Is it mostly a combination of kernel and gateway provisioners, the Enterprise Gateway, or something completely in-house?
Hi @waaffles, we started by trying to reuse the existing Jupyter ecosystem components, but we came to the conclusion that they had not been designed to support such cases and that it would be really too hard to evolve them. So we implemented something from scratch that is cloud-native by design.
Got it. It's unfortunate that the current ecosystem wasn't enough to achieve a similar goal. You've built something really cool over at Datalayer! Thanks for sharing, Eric. Any other information y'all can share about accomplishing local and remote kernel connectivity would be helpful.