OpenCV with JupyterLite/Voici?

Hi,

I’m working with JupyterLite on GitHub Pages, using the Voici version.

For the moment, it is possible to put an image on the screen of the application with PIL (Pillow) or with an ipywidgets Image: both work.

(screenshot: Test_load_image)
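For example, something like this works (a minimal sketch; `test.png` is just a placeholder name for an image shipped with the site):

```python
# Minimal sketch: display an image in JupyterLite, with PIL or with ipywidgets.
# "test.png" is just a placeholder name for an image shipped with the site.
from PIL import Image as PILImage
from IPython.display import display
import ipywidgets as widgets

# With PIL: the front-end renders the PIL image directly.
display(PILImage.open("test.png"))

# With ipywidgets.Image: pass the raw bytes of the file.
with open("test.png", "rb") as f:
    display(widgets.Image(value=f.read(), format="png"))
```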

But I would also like to use OpenCV: is it possible?

I modified the build-environment.yml like this:

…

```yaml
- pip:
    - voici>=0.3.2,<0.4.0
    - jupyterlite-xeus-python>=0.8.0,<0.9.0
    - numpy==1.23.1
    - opencv-contrib-python==4.6.0.66
```
… And the build and the deployment went fine.

But when I write `import cv2` in my Jupyter notebook, I get the classic message: `ModuleNotFoundError: No module named 'cv2'`

Is it possible to use OpenCV, or is it not supported?

If it is possible, how can we configure it?

Thanks for your help.

If using jupyterlite-pyodide-kernel, this seems to work:

```python
%pip install opencv-python
import cv2
```

I’m not sure what would be different with xeus-python-kernel, but it’s at least theoretically possible in lite, given sufficiently motivated upstream packaging.

Of course, not everything is going to work the same way. Here’s the first detection tutorial, with some tweaks: lite doesn’t know what a window is, so PIL is used to show things:
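Something along these lines (a sketch of that tutorial rather than the exact notebook; `faces.jpg` stands in for whatever test image is shipped):

```python
# Sketch of the face-detection tutorial, adapted for lite:
# cv2.imshow needs a window, so convert BGR -> RGB and let PIL render the result.
import cv2
from PIL import Image

# cv2.data.haarcascades points at the cascade files shipped with the wheel.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("faces.jpg")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

# Display via PIL instead of cv2.imshow / cv2.waitKey.
Image.fromarray(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
```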


Thank you a lot, Bollwyvl.

You are right, OpenCV works in JupyterLite:

I did this work:
https://dfialaire.github.io/Formation-NSI-_-Presentation-Jupyterlite/lab/index.html
… where I open and transform an image with OpenCV in JupyterLite…

But when I tried VideoCapture from OpenCV, it didn’t work… whereas CameraStream from ipywebrtc worked nearly perfectly…

I have a script which works locally on my computer to build a Harry Potter invisibility cloak ( https://biotech-online.pagesperso-orange.fr/Cape/Cape_invisibilit%C3%A9_HTML.html )… it needs to grab and transform a frame from a VideoCapture at every frame of the stream… but it seems complex to do that from a server…
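Roughly, the per-frame step looks like this (a much simplified sketch; the HSV bounds are placeholders, and the real script captures the background first):

```python
# Much simplified sketch of the per-frame "invisibility cloak" step:
# pixels matching the cloak colour are replaced by a previously captured background.
import cv2
import numpy as np

def make_invisible(frame, background):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Placeholder HSV bounds for a red cloak; the real script tunes these.
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    cloak = cv2.bitwise_and(background, background, mask=mask)
    rest = cv2.bitwise_and(frame, frame, mask=cv2.bitwise_not(mask))
    return cv2.add(cloak, rest)
```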

Do you have any idea why VideoCapture from OpenCV doesn’t work?
Thanks.

> VideoCapture from OpenCV

Much like with the window: what’s a camera? The whole “computer” is just a pile of JS, and a whole lot of things are missing: devices, threads, processes. Perhaps PIL or something could be used to convert the CameraStream frame by frame.
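Something like this might be a starting point (an untested sketch using ipywebrtc’s ImageRecorder, assuming a snapshot is taken with the widget first):

```python
# Untested sketch: take a snapshot from an ipywebrtc CameraStream and hand it to OpenCV.
import io
import cv2
import numpy as np
from PIL import Image
from IPython.display import display
from ipywebrtc import CameraStream, ImageRecorder

camera = CameraStream.facing_user(audio=False)
recorder = ImageRecorder(stream=camera)
display(recorder)  # use the widget to take a snapshot

# Once a snapshot has been taken, recorder.image.value holds PNG bytes from the browser.
def current_frame_bgr(recorder):
    pil_img = Image.open(io.BytesIO(recorder.image.value)).convert("RGB")
    return cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)
```

From there the usual OpenCV calls apply to the returned array; it just won’t be a live VideoCapture loop.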