I’m working with JupyterLite on GitHub Pages: the Voici version.
At the moment I can display images in the application either with PIL (Pillow) or with the ipywidgets Image widget: both work.
But I would also like to use OpenCV: is it possible?
I modified the build-environment.yml like this:
… The build and the deployment were fine.
But when I write `import cv2` in my Jupyter notebook, I get the classic message: ModuleNotFoundError: No module named 'cv2'
Is it possible to use OpenCV, or is it unsupported?
If it is possible, how do I configure it?
Thanks for your help.
With jupyterlite-pyodide-kernel, this seems to work:
%pip install opencv-python
I’m not sure what would be different with xeus-python-kernel, but it’s at least theoretically possible in lite, given sufficiently motivated upstream packaging.
Of course, not everything is going to work the same way. Here’s the first detection tutorial, with some tweaks: lite doesn’t know what a window is, so use PIL to show the result instead of cv2.imshow:
Thank you very much, Bollwyvl,
You are right, OpenCV works in JupyterLite:
Here is what I did:
… where I open and transform an image with OpenCV in JupyterLite…
But when I tried VideoCapture from OpenCV, it didn’t work… whereas CameraStream from ipywebrtc worked almost perfectly…
I have a script which works locally on my computer to build a Harry Potter invisibility cloak ( https://biotech-online.pagesperso-orange.fr/Cape/Cape_invisibilit%C3%A9_HTML.html )… it needs to grab and transform a frame from a VideoCapture on every frame of the stream… but doing that from a server seems complex…
Do you have any idea why VideoCapture from OpenCV doesn’t work?
VideoCapture from OpenCV
Much like with the window: what’s a camera? The whole “computer” is just a pile of JS, and a whole lot of things are missing: devices, threads, processes. Perhaps PIL or something could be used to convert the CameraStream frame by frame.
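For example, ipywebrtc’s ImageRecorder can capture a CameraStream frame as PNG bytes (`recorder.image.value`), which PIL and NumPy can turn into the BGR array OpenCV expects. A sketch of just the conversion step, assuming that byte source (the demo below feeds it an in-memory PNG instead of a live frame):

```python
# Sketch: convert a PNG frame (e.g. from ipywebrtc's ImageRecorder,
# recorder.image.value) into the BGR uint8 array OpenCV expects.
import io
import numpy as np
from PIL import Image

def png_bytes_to_bgr(png_bytes):
    """Decode PNG bytes into an OpenCV-style BGR numpy array."""
    rgb = np.array(Image.open(io.BytesIO(png_bytes)).convert("RGB"))
    return rgb[:, :, ::-1]  # RGB -> BGR by reversing the channel axis

# Demo with an in-memory PNG standing in for a camera frame.
buf = io.BytesIO()
Image.new("RGB", (4, 2), (255, 0, 0)).save(buf, format="PNG")  # pure red
bgr = png_bytes_to_bgr(buf.getvalue())
```

The resulting array can then go straight into the same cv2 calls the local script uses, one captured frame at a time.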