Sending value data to jupyter image widget

Hi, we have a Jupyter app that is 99% written as a C++ Python extension package and hosted in a Jupyter notebook. Basically, the core code simulates and renders a physics simulation, generating a stream of image data.

Currently, in Python, we have some code essentially like this in a background thread:

```python
while True:
    im.value = simulator.image_data()
```

where `im` is an image widget.

This call is slow, and the whole Jupyter app bogs down: the kernel gets stuck serializing and sending image data and can’t process client messages.

So, is there a way that my core C++ code could format a ZMQ message with the contents of the JPEG and send it directly to the client web browser?

Server processes can’t really send stuff directly over ZMQ to the client without making a lot of assumptions about locality/addressability. But constantly (un)base64-encoding data is indeed a bit of a bear.
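To put a number on that base64 bear, here’s a quick stdlib-only sketch; the 640×480 RGB frame size is just an illustrative assumption:

```python
import base64

# One 640x480 RGB frame's worth of raw bytes (any payload behaves the same).
raw = bytes(640 * 480 * 3)

# What a JSON/string transport forces on every frame.
encoded = base64.b64encode(raw)

# base64 emits 4 output bytes per 3 input bytes, a ~33% inflation --
# paid on encode in the kernel AND on decode in the browser, every frame.
ratio = len(encoded) / len(raw)
print(f"raw: {len(raw)} bytes, base64: {len(encoded)} bytes, ratio: {ratio:.3f}")
```

At 30 frames per second that overhead (plus the copies the encode/decode implies) adds up fast, which is why the binary-buffer paths below matter.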

Some options:

If it is ever of value for Python to have access to the actual bits (e.g. for working with an ML library), an acceptably-performing option may be to operate at the NumPy ndarray level. The widget ecosystem includes ipydatawidgets, which uses the binary-buffer part of the websocket protocol (the best we’ve got) and supports NumPy, xarray, and a few other structures. I haven’t done any image texturing, but I can confirm that getting onto this path is a massive improvement over JSON/strings for interacting with 3D mesh data via pythreejs, a downstream of ipydatawidgets.
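For a feel of what the binary route ships, here’s a rough NumPy-only sketch of the idea — a small metadata header plus the array’s raw buffer. The frame shape and header layout are my illustrative assumptions, not ipydatawidgets’ actual wire format:

```python
import numpy as np

# A simulated 480x640 RGB frame, as the kernel-side C++ code might hand it over.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# A binary-buffer sync sends a tiny descriptive header alongside the raw
# bytes of the array -- no base64, no per-pixel string conversion.
header = {"dtype": str(frame.dtype), "shape": frame.shape}
buffer = np.ascontiguousarray(frame).tobytes()

print(header, len(buffer))
```

The payload here is exactly `height * width * channels` bytes, versus the ~4/3 of that a base64 string transport would carry.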

To that end, I’d probably take a first look at pythreejs.DataTexture (docs, notebook), which has already done most of the heavy lifting of turning floats-in-the-kernel into pixels-on-the-page. And heck, if they liked their sim data in 2D, they’re gonna love it projected onto a rotating sphere.

Another option, if you don’t need “user” code to ever actually have the bits of the picture, is getting on the relatively-newfangled WebRTC bus. The canonical Jupyter example is ipywebrtc for IPython, though there is also a C++ variant for xeus-cling. In this case, your C++ code would join as a “peer” in a “room” that the browser could connect to and watch, and Python wouldn’t be any the wiser, even if it set up the room in the first place and controlled whether the pipe was streaming or not. WebRTC has both the well-known video channels, which are “best effort” and can be lossy, and data channels, which are lossless. Not sure which one makes the most sense for your use case… or to what extent these are supported in the Jupyter stack!

Good hunting!
