Autograding inside JupyterLab

Hello,

I am working on a proof of concept to find out whether any sort of auto-grading is possible within a JupyterLab environment for code-oriented programming assignments.

Something like this: given a problem statement describing the expectations, students write their own code and can then see its output on teacher-defined datasets, compared against the results produced by a teacher-provided reference solution (a file of correct outputs), together with feedback or a grade.
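
To make that more concrete, here is a rough sketch of the kind of check I have in mind (the names `grade_submission`, `data.csv` and `expected.csv` are just placeholders I made up for illustration, not an existing tool):

```python
import pandas as pd

def grade_submission(student_fn, data_path="data.csv", expected_path="expected.csv"):
    """Run the student's function on the teacher-defined dataset and
    compare its output with the teacher-provided reference results."""
    data = pd.read_csv(data_path)          # dataset shipped with the assignment
    expected = pd.read_csv(expected_path)  # outputs of the teacher's solution

    result = student_fn(data)              # student's code applied to the data

    # Simple all-or-nothing comparison; a real grader would give finer feedback
    correct = result.reset_index(drop=True).equals(expected.reset_index(drop=True))
    return "All outputs match the reference solution." if correct else "Output differs from the reference solution."
```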

One thing I tried is creating a Binder environment that ships the datasets and the problem statement; students can then write their code in JupyterLab or upload their local copy of a .py file.
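
Inside that environment, I was picturing a notebook cell that imports the uploaded file and runs a check like the one sketched above, roughly like this (`student_solution.py` and `solve` are made-up names for the uploaded file and the function students are asked to define):

```python
import importlib.util

# Load the uploaded file as a module (filename fixed by convention in the assignment)
spec = importlib.util.spec_from_file_location("student_solution", "student_solution.py")
student = importlib.util.module_from_spec(spec)
spec.loader.exec_module(student)

# Run the grading check against the function the student is expected to provide
print(grade_submission(student.solve))
```

Is something along these lines feasible in JupyterLab/Binder, or is there an existing tool (nbgrader or similar) that already covers this workflow?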