I developed a handful of Jupyter notebooks to solve a set of small problems. All follow the same workflow: upload data, compute something, show a nice report with plots.
I want to share these “tools” with non-technical people, so I’d like to streamline the interface a bit (no editing, a nice “start button”, etc.).
What is the preferred way to achieve that? Do you have any advice?
The latter two issues can be solved with appmode and ipywidgets, or possibly even Jupyter Book.
For examples of Jupyter Book features, start here. The documentation covers the underlying resources.
I don’t yet know a good way to handle the first part, because presumably you want users to be able to add their own data. Maybe you could include related example data in the repository and link to documentation on how to upload files and get back to appmode? Or is this something they can put into a cell, so you can trigger the processing via Jupyter Book?
I think there are widgets/libraries that provide an “Upload a file” button as part of your notebook. It has been a long time since I used any of them, and none really knocked my socks off when I did :-/
There is also a discussion about a FileUpload widget here: https://github.com/jupyter-widgets/ipywidgets/issues/1542
There is also the ipyupload package, which is already pip-installable: https://gitlab.com/oscar6echo/ipyupload
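For reference, ipywidgets itself now ships a FileUpload widget. A minimal sketch of wiring it to pandas might look like the following; this assumes the ipywidgets 8 value layout (a tuple of dicts), whereas ipywidgets 7 used a dict keyed by filename:

```python
import io
import ipywidgets as widgets
import pandas as pd

# A button-style widget; the uploaded bytes end up in `uploader.value`.
uploader = widgets.FileUpload(accept=".csv", multiple=False)

def handle_upload(change):
    # In ipywidgets 8, each entry has a `content` field holding the raw
    # bytes of the uploaded file (as a memoryview).
    item = uploader.value[0]
    df = pd.read_csv(io.BytesIO(bytes(item["content"])))
    print(df.head())

uploader.observe(handle_upload, names="value")
uploader  # display the upload button in the notebook
```

In appmode or Voila, the widget renders as an ordinary button, so non-technical users never see the code behind it.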
Thank you for your answers, I was not aware of those projects!
The data upload seems to be the main issue. Maybe I am trying to achieve something not really suited to Jupyter? In the end I am giving up the Read-Eval-Print-Loop pattern anyway. What if I am willing to rewrite my tools outside Jupyter? Is there a way to keep using the same libraries (NumPy, HoloViews, Bokeh, etc.) to create a usable application for non-technical users?
There are definitely options outside of Jupyter if you are willing to give up the Read-Eval-Print-Loop pattern, as it sounds like you are.
You could just write a Python script that people put alongside their data. You can generate a report in other ways, and that generation can be part of the script. The main approach I have used in the past is ReportLab, because it can do everything and make composited PDFs. But you can go simpler by using Matplotlib with annotations, or PIL to control the placement of the parts of your report.
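As a rough sketch of that “script next to the data” idea (the file names here are made up, and the fallback sample data just keeps the demo self-contained):

```python
# report_tool.py -- sketch: the user drops a `data.csv` (columns: x,y)
# next to this script and runs it; the script writes a plot as a report.
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # no GUI needed; we only write an image file
import matplotlib.pyplot as plt

if os.path.exists("data.csv"):
    # Load the user's data placed alongside the script.
    data = np.genfromtxt("data.csv", delimiter=",", names=True)
    x, y = data["x"], data["y"]
else:
    # Fallback sample data so the demo always runs.
    x = np.linspace(0, 10, 100)
    y = np.sin(x)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, y)
ax.set_title("Report: y vs. x")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.savefig("report.png", dpi=150)
print("Wrote report.png")
```

The “report” here is just a PNG; swapping in ReportLab or PIL at the end would give you a composited PDF instead.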
Or, if you want, I believe you can send the output to Plotly, where the ‘report’ could be a webpage with the resulting plots, if that works for you. I have old example code for compositing with ReportLab here.
Although it doesn’t have the report-making aspect, the first part of this notebook I have here sort of illustrates putting a script alongside sample data and then suggesting the user add their own data. The goal is really just to get users pointing the script plot_expression_across_chromosomes.py at their data so that they get a plot. The overarching script is stored in a different repo and fetched in the preparation steps. (However, I haven’t overhauled that particular script to my current, clearer coding style. So if you need a model, I’d suggest the one named ‘nucleotide_difference_imbalance_plot_stylized_like_Figure_8_of_Morrill_et_al_2016.py’ available here; the blurb on that page tells you how to run the demo via MyBinder.)
That type of combination can also be dockerized, so users would just need to place their data in a directory that you then tell them how to attach to a Docker container, and then trigger the processing with a single command. This example here builds to something like that while teaching about Docker. I say ‘something like that’ because, as far as I know, the software triggered isn’t a Python script, but the idea is the same.
As for giving people a place to put their data with your code, you can go even fancier and make an actual Python-based web app using Django or Flask. Because it is Python, you can use the Python modules you have mentioned. I have done this on PythonAnywhere with some really simple items before I discovered Jupyter and the MyBinder combination; for examples, see http://fomightez.pythonanywhere.com/ammonium_screen/ or http://fomightez.pythonanywhere.com/spartan_fixer/ . However, that adds the overhead of learning another ecosystem, plus hosting fees if you are letting people process a lot of data. Jupyter allows much more complex interaction between users and your code without the overhead of web development, though. In fact, I never quite got around to implementing the upload part and was just working with web forms at the time, as you’ll see if you look at my examples served from PythonAnywhere.
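A minimal Flask sketch of that kind of upload-and-process app might look like the following; the route layout and the summarize helper are purely illustrative:

```python
# Sketch of a tiny Flask app: the user uploads a CSV and gets a summary back.
import csv
import io

from flask import Flask, request

app = Flask(__name__)

# A bare-bones upload form served on GET.
FORM = """<form method="post" enctype="multipart/form-data">
<input type="file" name="data"> <input type="submit" value="Process">
</form>"""

def summarize(text):
    # Hypothetical processing step: count the rows in the uploaded CSV.
    rows = list(csv.reader(io.StringIO(text)))
    return "Received {} rows".format(len(rows))

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        uploaded = request.files["data"]  # the file field from the form
        return summarize(uploaded.read().decode("utf-8"))
    return FORM

# To serve locally: app.run(debug=True)
```

The `summarize` step is where your NumPy/Bokeh code would go; the point is just that the upload itself is a few lines of Flask.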
Does appmode still require containerization for security? Put another way, does appmode allow arbitrary code execution?
We just had a moment to chat with the QuantStack team about a project they’re working on called Voila (https://github.com/quantstack/voila). It sounds like it’s a good solution for this use-case! @SylvainCorlay could explain more but this is definitely worth checking out.
Another option that just emerged: https://github.com/bentoml/BentoML
appmode is a notebook extension that changes what the UI looks like. This means there is still a full-blown server (and kernel) connected to the notebook, so users can continue to run their own code (if they can figure out how to switch back to “edit mode” in appmode).
Something like Voila (mentioned by Chris) is better in this case, as it actually disables/doesn’t implement some parts of the kernel communications protocol, which means users can’t send code to the kernel. Quick demo: https://mybinder.org/v2/gh/QuantStack/voila/stable?urlpath=voila/tree/notebooks
Voila looks great. Thanks!
Hello, I am a newbie in data science and web apps, and I am also searching for a way to turn a Jupyter notebook into a web app. I found appmode very interesting and easy, but I really cannot work out whether appmode gives me a real web app or only runs locally. Could you please help me understand which choice is right for me?
Via the Binder system you can share your appmode creation publicly. You would need a repository that works with Binder, but that is the only limitation. The demo you get from pressing ‘launch binder’ at https://github.com/binder-examples/appmode is served via that service. So imagine using that repository as a template, replacing the pertinent code with your content, and making sure the links point at the right place, and you’d have a public app.
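For instance, a repository set up that way mainly needs a dependency file that pulls in appmode. Something like this requirements.txt would do (the binder-examples/appmode repo itself happens to use an environment.yml, but MyBinder accepts either):

```
# requirements.txt -- a minimal sketch for a Binder+appmode repository;
# add whatever your notebooks themselves import (numpy, bokeh, ...)
appmode
ipywidgets
```

Then the launch link just needs `urlpath=apps/your_notebook.ipynb` so visitors land directly in app mode.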
I’ll add that, on GitHub, all of that can be done in the browser interface with no need to use git on a local computer or ever clone locally. The postBuild file in that repository is only concerned with fetching data for the ipyvolume demo, so it doesn’t need to be included/edited. Because postBuild is an executable file type, I’ve found you cannot edit it via the GitHub browser interface without permissions getting messed up in the resulting image build. So if you did need to edit or include a postBuild file, you wouldn’t be able to alter the repo contents only via the GitHub web interface.
Thanks, @yonatan. Unfortunately, I note that Discourse doesn’t allow me to update that post anymore.
I will add, because earlier in this thread I said I had no example to point to of a worked-out file upload beyond the one in the Binder Gallery that renders an STL file, that I did eventually figure out general file uploading, and subsequent use of the uploaded file, for Voila (well, at least for single files). Folks looking for more on that can see the links here.
I’ve created an open-source framework called Mercury for easy sharing of Jupyter Notebooks.
With Mercury you can convert your notebook to a web app by adding a YAML config in the first raw cell. What is more, you can share many notebooks with it, because it has a built-in app gallery. You can also decide whether to show or hide your code.
Below are screenshots from Mercury:
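For anyone curious what that raw-cell config looks like, a sketch along these lines should give the idea (the field names here are from memory and purely illustrative; check the Mercury repo for the exact schema):

```yaml
---
title: My report
description: Upload data, compute something, show plots
show-code: False
params:
    n_samples:
        input: slider
        label: Number of samples
        value: 100
        min: 10
        max: 1000
---
```

Each entry under `params` becomes a widget in the app's sidebar, and its value is injected into the notebook before execution.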
Mercury is open-source, with the code at https://github.com/mljar/mercury
If you have more questions or need help, please let me know!
Mercury is open-source
I think every time you represent Mercury as open source, it’s probably wise to point out that:
Mercury is either open source under the AGPL, or available as a commercial license on request
I am neither a lawyer nor anti-AGPL, but I would like to point out that the rest of the stack it sits upon (Django, Jupyter) is downstream-friendly BSD-3-Clause, for the express purpose that others can build new things on it without having to worry about their work being bound. The AGPL is very sticky, and would almost certainly (again, IANAL) cover user content, which is not going to be very attractive to anybody using it in production, even for non-commercial interests.
Thank you @bollwyvl. You are right, Mercury is dual-licensed: it has an AGPLv3 license and a commercial license. I decided to go this route to provide long-term support for the project.
Nice use of nbconvert. Will it render jupyter widgets like voila does?
Taking a quick glance at mercury/tasks.py (at commit 70713b3ea37dcb31443b98e35c51c6e7d2f941c2 in mljar/mercury), did you consider using jupyter kernelspec list --json? It could save you some parsing.
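For what it’s worth, the --json output parses straight into a dict; a sketch (the sample string below mimics the shape of the real output, trimmed down):

```python
import json
import subprocess

def list_kernelspecs():
    # Calls the real CLI; assumes `jupyter` is on PATH.
    out = subprocess.check_output(
        ["jupyter", "kernelspec", "list", "--json"], text=True
    )
    return json.loads(out)["kernelspecs"]

# The JSON has this shape (sample trimmed from typical output):
sample = (
    '{"kernelspecs": {"python3": {'
    '"resource_dir": "/usr/share/jupyter/kernels/python3", '
    '"spec": {"argv": ["python", "-m", "ipykernel_launcher"], '
    '"display_name": "Python 3", "language": "python"}}}}'
)
specs = json.loads(sample)["kernelspecs"]
print(specs["python3"]["spec"]["display_name"])  # prints: Python 3
```

So instead of scraping the human-readable table, one json.loads gives you the resource directory and spec for every installed kernel.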
@krassowski thank you! It can’t handle ipywidgets right now. It just collects the input and executes the notebook. The process of notebook transformation is as simple as adding a YAML config.
In the future I would like to support notebook scheduling and notebook-to-REST-API conversion.
Thank you for the tip!