Server processes can’t really send stuff directly from zmq to the client without making a lot of assumptions about locality/addressability. But constantly (un)base64-encoding data is indeed a bit of a bear.
If it would ever be valuable for Python to have access to the actual bits (e.g. for working with an ML library, etc.), an acceptably-performing option is to operate at the numpy ndarray level. The widget ecosystem includes ipydatawidgets, which uses the best-we’ve-got binary buffer part of the websocket protocol, and supports numpy, xarray, and a few other structures. I haven’t done any image texturing, but can confirm that moving data this way is a massive improvement over JSON/strings for interacting with 3D mesh data with pythreejs, a downstream of ipydatawidgets.
To that end, I’d probably take a first look at pythreejs.DataTexture (docs, notebook), which has already done most of the heavy lifting of turning floats-in-the-kernel into pixels-on-the-page. And heck, if they liked their sim data in 2d, they’re gonna love it projected onto a rotating sphere.
Another option, if you don’t need “user” code to ever actually have the bits of the picture, is getting on the relatively-newfangled WebRTC bus. The canonical Jupyter example is ipywebrtc for IPython, though there is also a c++ variant for xeus-cling. In this case, your c++ code would join as a “peer” in a “room”, the browser would connect to that room and watch, and python wouldn’t be any the wiser, even if it set up the room in the first place and controlled whether anything was moving on the pipe.

WebRTC has both the well-known video channels, which are “best effort” and can be lossy, and data channels, which are lossless. Not sure which one makes the most sense for your use case… or to what extent these are supported in the Jupyter stack!