Does nbconvert execute JavaScript/WASM created by the cells?

My cells generate JavaScript (and, in another scenario, WASM) code that, once executed, writes its results to a DIV created by the cell.

That works fine in JupyterLab/Notebook, but when I use nbconvert it doesn’t seem to execute the JavaScript/WASM (or at least it doesn’t convert its outputs). I’m testing with the simplest possible output: just setting the DIV’s innerText to some test values.

Is this expected, and nbconvert is not meant to support these use cases, or am I doing something wrong?

Thanks!

ps.: I’m using a Go kernel.

Most of nbconvert works server-side, without a JS runtime (much less a full browser), so a lot of JS stuff will only work if very carefully plumbed (e.g. some Jupyter widgets).

It’s possible some custom template work could make this function for a specific use case, but the general case is… hard.
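To make that concrete, here is a stdlib-only sketch (the cell contents are made up) of why this happens: an HTML exporter copies the MIME bundles already stored in the .ipynb into the page. A `<script>` inside a `text/html` output is embedded verbatim, never evaluated, so any DOM mutation it would perform never reaches the exported file.

```python
import json

# A minimal cell roughly as it would be stored in the .ipynb: the kernel
# emitted an HTML output containing a DIV plus a script that mutates the
# DIV client-side. (Cell contents are made up for illustration.)
cell = {
    "cell_type": "code",
    "source": "render_fractal()",
    "outputs": [
        {
            "output_type": "display_data",
            "data": {
                "text/html": (
                    '<div id="out">placeholder</div>'
                    '<script>document.getElementById("out").innerText = "42";</script>'
                ),
            },
            "metadata": {},
        }
    ],
}

stored = json.dumps(cell)  # roughly what lands in the .ipynb file on disk

# What an HTML export fundamentally does with this output: copy the stored
# MIME bundle into the page. No JS engine runs server-side, so the DIV still
# says "placeholder" in the exported file; the mutation to "42" only ever
# happens in a real browser.
exported_html = json.loads(stored)["outputs"][0]["data"]["text/html"]
print("placeholder" in exported_html)
```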

Thanks @bollwyvl , I was half-expecting this to be the case.

I’m trying to write integration tests for the kernel I’m writing, hence I need the JavaScript and WASM to be executed and the results captured somehow – so far nbconvert has worked great.

Any pointers or suggestions on how to orchestrate running chromium+jupyter and capturing the result (export to HTML?) for testing? (This is way beyond the scope of the original question… so it should probably be asked under other tags?)

JupyterLab, Notebook 7, and JupyterLite use galata, with playwright and its magic instrumented browsers, downloaded at runtime. This approach is likely closer to white-box integration testing, as it requires an exact match of the system-under-test, even though, due to the massive size of things, it is generally run from a separate node_modules. This approach is also promoted in the extension-examples repo and the copier template. Many of these tests rely on magic screenshot comparisons, which adds a fairly high maintenance burden without some automation, as things like pixel-level font kerning differences can cascade into large changes. The reports are… nice, especially the animations it creates.

I’ve been using (disclaimer: and maintaining) robotframework-jupyterlibrary for pretty much all of my stuff, which takes a black-box acceptance testing approach. It recently added JupyterLab 4/Notebook 7 support, but it also works with Notebook Classic and older JupyterLab. While it can also use screenshots (and even comparisons with e.g. opencv), I try to avoid treating them as the artifact-of-record… but do try to repurpose the screenshots for e.g. documentation.

One of the few in-the-wild uses of this multi-client approach is jupyter-server-proxy, which relies on a fairly complicated provisioning scheme (or doing whatever the CI runner does) for its webdriver binaries and browsers.

I generally use (disclaimer: and maintain) the conda-forge stack of firefox/geckodriver (because chrom(e|ium|edriver) provisioning is… complicated). And I don’t want to propagate any only-works-in-chrome “features.”

An acceptance testing environment...
channels:
  - conda-forge
  - nodefaults
dependencies:
  - robotframework-jupyterlibrary
  - firefox ==115.*              # latest LTS, though 102 _still_ works
  - geckodriver                  # will pull a compatible one
  - selenium <4.10               # working on supporting newer stuff 

For truly black-box testing, the above (which is itself not small) isn’t even installed in the same environment as the system-under-test, and only knows the location of the jupyter-lab executable. This allows it to also test things like JupyterLite, using mostly the same keywords, and could theoretically test the interplay between multiple installs (each in its own environment) during the same test suite.

hey, thanks for the links, I’ll have to dig deeper into them.

But I just found out the problem is even more basic.

When the kernel publishes HTML data to jupyter-server (?? I assume ?), it is both sent to the front-end (browser) and saved in Jupyter’s copy of the .ipynb.

Now if the kernel (on behalf of the user’s cell code) publishes JavaScript that changes some previous HTML data (the insides of a DIV tag previously created), the jupyter-server is never informed of the change – even though the change happened in the browser.

So the scripted change is only visible in the browser and never saved, for instance. Let’s say I have a JavaScript program that generates an image of a fractal. The image won’t be saved… :frowning:

I’m just making assumptions here. I wonder if there is a way to tell the jupyter-server to replace its state of the cell output with the .innerHTML of an element in the browser? Is there such an API in Jupyter?

— Edit —
Or alternatively, a way for the JavaScript to communicate back to the kernel – and then the kernel can update the HTML output through jupyter-server (?) with the contents sent by the JavaScript, so it gets saved.
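For the kernel-side half of that round trip, the Jupyter messaging spec does define a mechanism: publish a `display_data` message with a transient `display_id`, then later send `update_display_data` with the same id, and compliant clients replace the stored output in place, so the final state lands in the saved .ipynb. Here is a sketch of the two payloads as plain dicts (field values are illustrative); note this only helps once the kernel itself knows the result – getting data from the browser back to the kernel still needs something else.

```python
# Sketch of the two IOPub messages a kernel can use to overwrite one of its
# own, already-published outputs (per the Jupyter messaging spec). Only the
# relevant fields are shown; a real message also carries header/parent fields.

display_id = "fractal-1"  # any kernel-chosen identifier

# 1) Initial publish: "display_data" with a transient display_id.
display_data = {
    "msg_type": "display_data",
    "content": {
        "data": {"text/html": '<div id="out">rendering...</div>'},
        "metadata": {},
        "transient": {"display_id": display_id},
    },
}

# 2) Later, once the result is known: "update_display_data" with the same
# display_id replaces the earlier output in place.
update = {
    "msg_type": "update_display_data",
    "content": {
        "data": {"text/html": '<div id="out">final fractal HTML</div>'},
        "metadata": {},
        "transient": {"display_id": display_id},
    },
}
```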

By design, a kernel doesn’t know what a web page is, and barely knows what an .ipynb is (there are a few kernel messages about cell ids). To make those kinds of changes, one would need a custom client extension… or rather, many extensions, due to there being many clients.

Most of these round-trip problems were solved by e.g. jupyter widgets, which established the base pattern for communicating between the two, but brings with them their own challenges.

Yes, the design is a bit unfortunate, since the limitation has an impact on what can be done.

I’m not sure what the jupyter widgets do, since they seem to be Python specific (or IPython kernel specific)… I’m wondering if there is an API they are talking to that I could take advantage of.

I wonder if the Jupyter Server API is accessible by the JavaScript/WASM (it’s served on the same port as the web server, right?), and could be used to POST files with the contents. And through these files JavaScript/WASM could communicate with the kernel? Any ideas?

design is a bit unfortunate, since the limitation has an impact on what can be done.

Tying the implementation too tightly to HTML (much less a specific client’s HTML) means other things (like jupyter-console, nbconvert, colab, cocalc, vscode…) wouldn’t work.

python specific

Under the hood, it uses the comm messages. Not all kernels support these messages, but if they do, and there is a kernel-side implementation of the base widgets, a lot of things will work out of the box. Off the top of my head, IRkernel and IJulia both implement comm, but might not implement the core widget spec.
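For a non-Python kernel, the relevant thing to implement is the comm message family. A sketch of the three message shapes as plain dicts (the target name and payloads are made up; only the `content` fields are shown):

```python
# Sketch of the comm messages (per the Jupyter messaging spec) that a kernel
# implementing `comm` must send/handle. Values here are illustrative.

comm_id = "some-fresh-uuid"  # normally a freshly generated UUID

# Either side (frontend JS or kernel) opens a comm on a named target:
comm_open = {
    "msg_type": "comm_open",
    "content": {
        "comm_id": comm_id,
        "target_name": "my_kernel.results",  # hypothetical target name
        "data": {},                          # optional initial payload
    },
}

# Either side then sends arbitrary data over the open comm:
comm_msg = {
    "msg_type": "comm_msg",
    "content": {
        "comm_id": comm_id,
        "data": {"html": "<div>computed in the browser</div>"},
    },
}

# And tears it down when done:
comm_close = {
    "msg_type": "comm_close",
    "content": {"comm_id": comm_id, "data": {}},
}
```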

Jupyter Server API

It is theoretically possible in a full client/kernel setup, but again, it certainly won’t work in nbconvert.
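On the Contents API idea: anything that can reach the server (including page JavaScript, with the right token/XSRF handling) can PUT a file via `/api/contents/<path>`. A stdlib-only sketch; the base URL, token, and path are placeholders, and the request is built but not actually sent:

```python
import json
import urllib.request

# Placeholders: base URL, token, and target path depend on your deployment.
base_url = "http://localhost:8888"
token = "YOUR-SERVER-TOKEN"
path = "results/output.json"

# Contents API body: PUT creates/overwrites a file at `path`.
body = json.dumps({
    "type": "file",
    "format": "text",
    "content": json.dumps({"fractal": "..."}),  # made-up payload
}).encode()

req = urllib.request.Request(
    f"{base_url}/api/contents/{path}",
    data=body,
    method="PUT",
    headers={
        "Authorization": f"token {token}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # uncomment against a running server
```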

Thanks again!

In parallel I had just found the “Low Level Widget Explanation”, which paints an overview of what is going on and talks about the Comm channel – I’ll try to figure out how to access that (on the JavaScript side).

If I manage to make that work (in the Kernel and Javascript), that should be enough to build a generic widgets framework in any language :slight_smile:

On design being unfortunate:

My comment was not about it being tightly coupled with HTML; it just needs to support updates from different ends, or communication between the different ends. Which, I now learned, it does, through this Comm channel. Looking forward to trying to get it to work!

Just for others reading this thread: I managed to work around it by writing a small program to execute the notebook on a headless chromium and then saving the results. More details in this other thread.
