BinderHub lets me define a repo, which may be a kernel-defining repo, and create a build against it that eg lets me open a notebook against that (dynamically built) kernel.
But what if I have my own notebook server and just want to connect to a remote kernel built from that repo?
What I’m wondering is: is the notion of a “KernelHub” meaningful, where eg you pass it a git repo URL, KernelHub builds and runs the repo as a kernel process, and it gives you back connection details that you can use to access the kernel, as a remote kernel, from your own notebook server?
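As a purely hypothetical sketch of what the client side of such a service might feel like (KernelHub, its URL and its endpoint are all made up here; this is the imagined API, not anything that exists):

```python
# Hypothetical sketch only: "KernelHub" doesn't exist; this imagines its client API.
import requests

KERNELHUB_URL = "https://kernelhub.example.org"  # made-up service URL


def launch_remote_kernel(repo_url, ref="HEAD"):
    """Ask the (imaginary) KernelHub to build `repo_url` and run it as a kernel.

    The service would presumably return the same fields as a local Jupyter
    connection file (ports, transport, signing key), plus a host to reach
    them on, so a local client could treat it as a remote kernel.
    """
    resp = requests.post(
        f"{KERNELHUB_URL}/api/kernels",  # hypothetical endpoint
        json={"repo": repo_url, "ref": ref},
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"ip": ..., "shell_port": ..., "key": ...}


# connection_info = launch_remote_kernel("https://github.com/has2k1/gnuplot_kernel")
```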
(One of the things I’ve noticed about installing custom kernels is that they may have a stack of other required dependencies, and the installation guides for those are often less than helpful…)
(ThebeLab et al are client packages that call on a kernel running via BinderHub. Do they need the full machinery of all the stuff BinderHub builds into its images?)
ah - I guess my question then is: what do you mean by “just a kernel”? I thought you were asking for arbitrary binder-like environments running via a text interface in the cloud.
By “kernel”, I guess I mean a minimal environment that includes just the Linux and language-specific packages needed to execute the programming language commands that I would be able to execute in a notebook code cell associated with that “kernel”.
So eg for the gnuplot kernel, that would be gnuplot and its dependencies, metakernel / the gnuplot metakernel and their dependencies, and whatever other dependencies are required so that I could fire up a container containing that stuff and connect to it, as “a kernel”, from something like ThebeLab.
I think in this case BinderHub is actually a decent choice - if you specify a minimal number of requirements, it basically just uses a barebones Ubuntu environment, which is quite lightweight, then spins up a Jupyter server so you can connect to it. I think you can do the same thing w/ JupyterHub too (that’s what BinderHub is using under the hood).
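Once you have that server, talking to it is just the documented Jupyter server REST API. A minimal sketch, where the base URL and token are placeholders standing in for whatever the Binder/JupyterHub launch hands back:

```python
# Minimal sketch of starting a kernel on a remote Jupyter server via its REST API.
# BASE_URL and TOKEN are placeholders for what the launch gives you.
import requests

BASE_URL = "https://hub.mybinder.org/user/example-abc123/"  # example value
TOKEN = "…"  # the API token for that server

headers = {"Authorization": f"token {TOKEN}"}

# Start a new kernel on the remote server.
r = requests.post(BASE_URL + "api/kernels", headers=headers)
r.raise_for_status()
kernel = r.json()
print(kernel["id"], kernel["name"])

# Kernel messages then flow over a websocket at:
#   BASE_URL + f"api/kernels/{kernel['id']}/channels?token={TOKEN}"
```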
I’d start prototyping this with a JupyterHub or BinderHub. Also investigate Jupyter Kernel Gateway, as it sounds like it should be made for this. I’ve never found time to look at it, and through all the enterprise talk I can’t quite work out what it does.
Suppose I’m running a Jupyter notebook server locally on a Windows machine. I want to run a particular thing that requires a particular Linux package. There’s a Binderised repo that does the job. Can I add that repo as a (remote) Binderised kernel to my notebook kernel list and run notebooks against it?
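I don’t know of an off-the-shelf way to do this, but mechanically it would come down to registering a local kernelspec whose launch command is some sort of bridge to the remote kernel. A hypothetical sketch (the `binder-kernel-proxy` command is entirely made up and stands in for whatever bridge you’d have to write or find; the repo URL is just illustrative):

```python
# Hypothetical sketch: register a local kernelspec whose argv launches a
# (made-up) proxy that forwards traffic to a Binder-built remote kernel.
import json
import os

from jupyter_client.kernelspec import KernelSpecManager
from jupyter_core.paths import jupyter_data_dir

spec_dir = os.path.join(jupyter_data_dir(), "kernels", "binder-gnuplot")
os.makedirs(spec_dir, exist_ok=True)

kernel_json = {
    "argv": [
        "binder-kernel-proxy",                     # hypothetical bridge command
        "--repo", "https://github.com/has2k1/gnuplot_kernel",
        "--connection-file", "{connection_file}",  # filled in by Jupyter at launch
    ],
    "display_name": "gnuplot (remote via Binder)",
    "language": "gnuplot",
}
with open(os.path.join(spec_dir, "kernel.json"), "w") as f:
    json.dump(kernel_json, f, indent=2)

# The new spec should now show up alongside the local kernels.
print(KernelSpecManager().find_kernel_specs())
```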
(This is perhaps also relevant to the Juniper and Carnets iOS clients?)
Suppose further that I’m writing some notebooks where the markdown analysis / evaluation is sensitive even if the code / code outputs aren’t (maybe because they lack context, or because the data is only meaningful / sensitive if I know who / where / what it relates to). If I run my notebook code against the remote kernel running on MyBinder, will the notebook be saved to the MyBinder container, or is the only traffic the code-cell code to be executed and the execution response?
My understanding here is that only the code is sent to the kernel and only the outputs come back, assuming the kernel is separate from the notebook server. The full notebook contents are only ever sent to the notebook server.
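That matches how the kernel messaging protocol works: a client sends the kernel messages like `execute_request`, which carry code, not documents; the `.ipynb` file itself never goes to the kernel. A minimal sketch using `jupyter_client` (the connection file name is a placeholder for a real one):

```python
# Sketch: what actually travels to a kernel. With jupyter_client you speak
# the kernel messaging protocol directly; the only thing sent to the kernel
# is the code, and the only things returned are status and outputs.
from jupyter_client import BlockingKernelClient

client = BlockingKernelClient()
client.load_connection_file("kernel-1234.json")  # placeholder: ports/key for the kernel
client.start_channels()

msg_id = client.execute("print(40 + 2)")  # an execute_request: code only
reply = client.get_shell_msg(timeout=10)  # the matching execute_reply
print(reply["content"]["status"])         # e.g. "ok"

client.stop_channels()
```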
There are probably better ways of doing this, but I was trying to think through for myself what having this sort of client to hand might make possible!
Another take on launching and accessing a remote MyBinder kernel, wrapped up in hacky magic…
It works more or less just well enough to prove the concept of how this sort of thing might be used, as long as you set your expectations quite low…
I’m guessing there’s a proper way of connecting to and conversing with the kernel using code from the Jupyter notebook / jupyter_client packages, but I was riffing on a code fragment from SageMath that I’d previously had working…
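For reference, the non-magic core of the launch step can be done against the documented BinderHub build API on mybinder.org, which streams build events and, on success, hands back a server URL and token that you can then use with the Jupyter REST API as above. A rough sketch, with error handling omitted and the example repo purely illustrative:

```python
# Sketch: request a Binder build via mybinder.org's /build endpoint, which
# returns a server-sent event stream ("data: {...}" lines); the "ready"
# event carries the launched server's URL and token.
import json

import requests


def launch_binder(owner, repo, ref="HEAD"):
    url = f"https://mybinder.org/build/gh/{owner}/{repo}/{ref}"
    with requests.get(url, stream=True) as resp:
        for line in resp.iter_lines(decode_unicode=True):
            if not line or not line.startswith("data:"):
                continue  # skip SSE keep-alives and blank lines
            event = json.loads(line[len("data:"):])
            print(event.get("phase"), event.get("message", "").strip())
            if event.get("phase") == "ready":
                return event["url"], event["token"]
    raise RuntimeError("build did not reach the ready phase")


# server_url, token = launch_binder("has2k1", "gnuplot_kernel")
```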