Thoughts and Experiences from using Jupyter in Enterprise

Hello Jupyter community. I’d like to share some of my thoughts and experiences from using Jupyter in a strictly-regulated Enterprise environment for a few years now (note: I’m not talking about Enterprise Gateway, I just work in a large corporate space). I’ve talked with some of you before at Jupytercon, the Paris Dashboarding workshop, and on phone calls. @choldgraf @fperez @lheagy I wish I could have made the trip out to the west coast with Dave Stuart and company a few weeks ago to talk in person. If anyone is not a fan of really long posts, then you have my sincere apology – skip to the bottom for TLDR.

There are a myriad of topics around Enterprise Jupyter use that deserve their own threads: compute infrastructure, user education, Notebook discoverability, quality review processes, maintenance tails, etc. If I ever had the chance to give a Jupytercon talk though, it would be about the “spirit” of Jupyter use in large organizations – are we using Jupyter in the right way? What does the right way even mean? Why use Jupyter over another tool?

In order to frame this discussion the right way, I need to share some context about our workplace first. We have around 10,000 business analysts/domain experts who use Notebooks right now, and that will grow. The user base falls along a spectrum of programming comfort that ranges from proficient coders to so-scared-of-seeing-code-they-immediately-close-the-tab. Most users are on the latter part of the spectrum; around 15% of the user base creates and shares Notebooks publicly (well, public-on-internal-network), the majority just run Notebooks that someone else has created. Ideally we’ll see users progress along that spectrum over time as they become familiar with Notebooks: first just running them, then editing a value here or there, then copy/pasting bits from different Notebooks to come up with something new, and finally writing whole Notebooks from scratch.

I mentioned “strictly-regulated”, which I’ll define as a workplace where every analyst uses the same tools but receives different data depending on their access. Dave went into more detail about our regulations in his 2018 Jupytercon talk. Finance and healthcare use-cases are probably very similar to ours. Imagine two doctors who use the same API to pull up patient records but can only retrieve information about their own patients. A Jupyter-savvy doctor could write a Notebook that does some nifty task, but every other doctor would have to run that Notebook on their own to make it work with their patient set.

Our primary development/compute environment consists of per-user isolated virtual machines. We have one-click deployment of a Jupyter container built off the docker-stacks image. That container is integrated with a Notebook Gallery so users can push and pull Notebooks without having to deal with git workflows or email Notebooks around. NBGallery handles Notebook version control and sanitizing/stripping cell output (another regulation requirement).
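For a sense of what the output-sanitizing step involves, here is a minimal stdlib-only sketch of stripping outputs from the raw .ipynb JSON (a toy notebook dict stands in for `json.load`; NBGallery’s actual implementation differs):

```python
import json

# A .ipynb file is just JSON; stripping output means clearing two fields
# on every code cell while leaving the narrative untouched.
nb = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "cells": [
        {"cell_type": "code", "source": "1 + 1",
         "execution_count": 1,
         "outputs": [{"output_type": "execute_result",
                      "data": {"text/plain": "2"}}]},
        {"cell_type": "markdown", "source": "# Narrative stays untouched"},
    ],
}

for cell in nb["cells"]:
    if cell["cell_type"] == "code":
        cell["outputs"] = []           # drop any results/figures
        cell["execution_count"] = None

sanitized = json.dumps(nb, indent=1)   # ready to write back to disk
```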

Workflow and Widgets
With all that out of the way, let’s move on to singing praises about Jupyter. It is wildly popular at our work because it offers many things to many users; there is not just one killer feature. The low barrier to entry is appealing to newcomers and experienced programmers alike. Documenting workflows/tradecraft is extremely important to us, so the combination of markdown and code, in addition to amazing nbextensions like table of contents, is great. We can also build Notebooks that rival “corporate web tools” when it comes to user interface and data visualization, thanks to the ecosystem around ipywidgets, pyviz, and similar efforts.

In fact, Notebooks that are de-facto webapps is one of the most significant trends I see in our workplace. For many Pythonistas in our Enterprise, it is actually faster and easier to develop a web application in a Notebook than to set up a Flask-based site. Some of that has to do with the way our infrastructure and data hosting policy is set up, but there are also really compelling advantages when it comes to debugging, introspection, iterating on new features, etc. From the Notebook user’s perspective, interactions such as entering query terms or selecting date ranges feel more natural with ipywidgets than input prompts or editing code cells. Being able to programmatically change the inputs, such as populating a dropdown with user-specific profile information pulled from an API, is also super useful.

For those of you who know me, you know where this is going. A big part of our user base wants widget-focused webapp-style Notebooks for good reasons. However, writing widget-focused Notebooks tends to draw people into a coding style that undermines many benefits of working in the Notebook environment. Using widgets means putting application logic in callback functions. Besides making cells longer and code harder to follow, callback functions pull many workflow-relevant variables out of the global scope, which makes introspection and debugging much more difficult.
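A toy illustration of the scoping problem (hypothetical names, no real widget library involved): once the logic moves into a callback, the intermediate variables vanish the moment it returns.

```python
results = {}

def on_click(query):
    # In a widget callback, these intermediates exist only while it runs
    raw = [1, 2, 3]                     # stand-in for an API query
    cleaned = [x * 2 for x in raw]
    results["summary"] = sum(cleaned)

on_click("some terms")                  # simulates the button press
print(results["summary"])               # only the explicitly stashed value survives
# 'raw' and 'cleaned' are gone; there is nothing left to introspect or debug
```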

UI code can also be a distraction from the tradecraft, depending on what the Notebook is trying to do. If we’re talking about a Notebook where data visualization/widget integration is the core concept, like displaying an interactive plot with a slider bar, then widget code should be front and center. More often than not in our “production” Notebooks, widgets are just used to gather user input, yet they are quite confusing for novice programmers to look at.

What would my ideal Notebook look like? The application logic would be linear and synchronous, with small amounts of code in each cell. I would have a table of contents or other extensions that make it easy to navigate around the Notebook and offer an at-a-glance overview of what’s happening. User input would be handled by beautiful widgets. Depending on what the Notebook is doing, it would also be straightforward to parameterize and execute programmatically as a cron, REST endpoint, papermill setup, or something similar.
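For the parameterization piece, a minimal sketch of the papermill convention (file and parameter names here are hypothetical): defaults live in a cell tagged `parameters`, and papermill injects overrides when executing the Notebook headlessly.

```python
# --- cell tagged "parameters": plain assignments papermill can override ---
start_date = "2019-01-01"
max_rows = 100

# --- the rest of the notebook uses them like any other globals ---
report = f"rows<={max_rows} since {start_date}"

# Programmatic execution would then look roughly like:
#   import papermill as pm
#   pm.execute_notebook("report.ipynb", "out.ipynb",
#                       parameters={"start_date": "2019-06-01"})
```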

I have tried to create hacks to bridge the gap between new-user-friendly widget-based callback Notebooks and linear/synchronous workflows but they only address some use-cases. I don’t have a silver bullet to this problem. If you’re interested, check them out -

The Refactor Cycle
I see many Notebook authors go through a broad three-step process in their “production” Notebook development, detailed in the notebook_restified README. First, they explore their problem with a scratchpad-style Notebook. Second, they clean up the Notebook by adding a narrative with markdown/comments and share it with other users for review. Third, they take the application logic from the linear/synchronous/introspective/explanatory Notebook and drop it into a callback function, or use it to build a web-app with Flask, or just pass the tradecraft/workflow/code on to another team to rewrite in some corporate analytic (Java, MapReduce, etc.).

That cycle would be fine if Notebook authors never had to debug user problems, add extra features, or look more deeply into the data they’re working with. When an author does need to dig deep into some “productized” refactor, what I see them do is revert to the first steps, where they’re in a linear/synchronous/minimal-code-per-cell mode. There is a lot of copy/pasting going on.

I think the heart of what I’m trying to get at is that I wish I could separate application logic and user input in Jupyter in a sort of Model-View-Controller pattern. For me, Notebooks distinguish themselves as being the absolute best tool to represent a workflow. When a workflow is packaged up as a pip installable library, or in a callback function, or as server-side web code, then it loses the things that a Notebook gives me like narrative, and introspection, and easy extensibility, and everything else.

Side note here, going back to our diverse user base – bite-sized readable code is crucial to luring code-shy domain experts down the path of learning some programming. The oft-touted benefit of using Jupyter to document and execute tradecraft is only really valid if the code is readable AND the end user can actually understand code.

Follow-on discussion topics?
Making some widget actions synchronous, such as stopping further cell execution until a button is clicked, would mitigate many of the problematic patterns I see. I understand that is a very complicated and nuanced change to make. In past discussions and reading through github issues (particularly @jasongrout’s comments in ipywidgets issue 1349), I get the impression that the most likely way to make this real is to move widget communications to another comms channel separate from cell execution. Is that an accurate impression?

Another mitigating idea is just changing how we distribute our end products. Maybe instead of a single .ipynb file that we share on NBGallery, we should deploy Binder-style repos which have one widget-based Notebook and one application-logic-narrative Notebook? Moving away from a single file has its own problems, not least of which would be redundancy and the risk of editing one Notebook and forgetting to update the other.

I see a pattern showing up over and over again in our Notebook authoring community: authors refine their work from a scratchpad to a mature workflow that is introspective, easy to debug, easy to extend, and includes a narrative about their work. Then they take the application logic from that Notebook and repackage it for use with a more robust user input interface like a Flask app or ipywidgets/callbacks. The repackaging ends up taking away the benefits of the narrative-style document (or forces constant refactors).

I hope this post was thought-provoking and resonates with experiences that you’ve seen or heard about in Enterprise Jupyter use. Thank you for your time, and thank you to all the core developers and contributors that built this wonderful ecosystem. I’m looking forward to any conversations that come out of this.


Thanks for this super post!

One “cool new tool” that came to mind reading about your “blocking widgets” ideas is Streamlit. What I like is their simple view of “write a script and we will run it top to bottom every time”. There is a lot you can’t do in this model, but there is also an awful lot you can. And it is simple. Despite them having widgets, I don’t think these are the widgets of Jupyter, and I also don’t think they reuse any of the tech :frowning:

Maybe Streamlit is something to look at and import ideas from for your “blocking widgets” needs.


Indeed, great post!

I think the general dissonance between “notebooks” and “reusable code” is a pretty big blocker at scale.

Our little push on this has been importnb, which allows transparent importing of notebooks from other notebooks, as well as regular Python scripts (even without IPython around, if you don’t use %magic) as modules or functions. It has some conventions like “the first markdown cell in a notebook is its module docstring” and “the markdown cell above a function/class is its docstring”. A lot of languages wouldn’t be able to do this natively, though: Python just happens to be extremely flexible.

This can partially alleviate the challenges of thinking asynchronously in widgets: you write your “script” notebook in (developer) narrative form, and then have a “ui” notebook written in (user) narrative form (perhaps just a single @ipywidgets.interact), which imports the other one and uses it in a functional style, basically restart-and-run-all-ing every time.
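The split might look something like this sketch (names are hypothetical; the ipywidgets binding is shown as a comment since it needs a live kernel):

```python
# --- "script" notebook: linear, introspectable, exposes one pure function ---
def run_report(query: str, limit: int = 3) -> list:
    results = [f"{query}-{i}" for i in range(limit)]  # stand-in for real work
    return results

# --- "ui" notebook: imports the script (e.g. via importnb) and binds widgets ---
# from ipywidgets import interact
# interact(run_report, query="jupyter", limit=(1, 50))
print(run_report("jupyter"))   # functional style: recomputed from scratch each call
```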

Another stab at this is wxyz, which models more things as flows through widgets. You can build up (reusable) pipelines of widgets that connect value to source and get access to their intermediate controls as widgets. It’s unfortunate the link syntax is a bit hard for folks to discover; I’d love it if some other form of widget API existed that maintained nice discoverability, as well as a more obvious way to understand the linkages.

Indeed: there are a great number of asynchronous frameworks out there that provide almost the same functionality, but talk together poorly. Definitely a next level kind of challenge.


@betatim, thanks for the suggestion. After looking through their docs, it does look like Streamlit is covering some of the same use-cases and general workflows I had in mind with notebook_restified. Unfortunately I’m having a lot of issues trying to get it up and running to test out any real-world use cases. Version 0.49.0 is giving a NoneType object has no attribute 'strip' error, and version 0.48.1 doesn’t have the websocket baseUrl support that we need on our infrastructure. I’ll file appropriate tickets; it’s still something for us to circle back to later on.


@bollwyvl I agree that the conversation between “importable/executable notebooks” and “reusable code” is a delicate one. I’ve seen a few different implementations of importing or executing Notebooks, thanks for the pointers to yours. I believe that we are pretty much on the same page when it comes to thinking about the asynchronous challenge of widgets.

A decision to put the application logic with the UI code versus hosting it as a separate Notebook versus packaging it as an importable flat file is going to be really subjective and case-by-case. I try to think about that decision primarily in terms of readability and comprehension of the workflow and tradecraft.

If the application logic is really short and concise, it might read better bundled in with the UI code. For me, that gut check is somewhere around, “can I see all of the application logic in one screen”. Once it’s past 50 or so lines, I tend to lose any readability benefits in a Notebook – it might as well be in a flat file and imported.

For complex application logic, I would choose to put something in a Notebook over a flat file if the workflow really was linear and there was a compelling narrative to add. A typical example would be querying an API, cleaning data, and putting together some output. I like that you used the term functional @bollwyvl, I’m going to start using that as a descriptor in these conversations. If I’m writing functions or classes in the application logic, I would say that’s a sign it might be more readable as a flat file than as a Notebook.


You have some very good insight into this, and enterprise data workflows more generally. I think the major downside of IPyWidgets is that it conflates many different concerns:

  • Layout
  • Data binding
  • Code evaluation
  • View

All these are handled by widgets, and each widget is linked up in an ad-hoc fashion (whether via callbacks, link, or interactive). This means that the technical complexity of a dashboard can scale exponentially with the number of widgets! Each additional widget is a change that is harder to pull back from, and at some point it becomes hard even to add an additional slider.
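A toy illustration of that scaling problem in plain Python (no widget library involved): with ad-hoc pairwise linking, each new control may need wiring to every existing one, so the number of links grows as n*(n-1).

```python
# Each "widget" is just a value holder with observers (toy model).
class Control:
    def __init__(self, value=0):
        self.value, self.observers = value, []
    def set(self, value):
        self.value = value
        for cb in self.observers:
            cb(value)

controls = [Control() for _ in range(4)]

# Ad-hoc pairwise wiring: every control updates every other one.
links = 0
for a in controls:
    for b in controls:
        if a is not b:
            a.observers.append(lambda v, b=b: setattr(b, "value", v))
            links += 1

print(links)   # 12 links for just 4 controls: n*(n-1) wiring to maintain
```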

One solution that works right now for Pythonistas is PyViz; PyViz (and its constituent components Panel, Parameters, etc.) separates out these concerns nicely and lets you disentangle application logic from view.

My company, Mavenomics, is working on something called MavenWorks that goes further. We’re in early alpha but our goal is to offer iterative, UI-oriented dashboarding so that all levels of users (including your non-technical, programming-averse users!) can iterate on dashboards quickly.

You can take a look here on GitHub (Mavenomics/MavenWorks), or jump right in with our demos on Binder.

This is still very alpha, but if it interests you we’d love to have you try it out and give us your thoughts!


Thanks for a great post.

I’m also working hard to understand the workflows and frameworks for creating analytics apps and services in Python and in an enterprise setting.

So far that work has led to a couple of projects.

So far I have good experiences with both Streamlit and Panel.

Streamlit is so fast for users to get up and running because it’s just like running a small script, which they already know how to do. And it looks good. The downside is that what they can do is still relatively limited.

Panel, on the other hand, requires some time to learn but is really powerful. And you can use it both in a notebook and without one. The thing I like the most is actually the reactive API, where you can decompose your app into smaller testable, reusable, and maintainable components. The downside to Panel, as I see it, is that it sits somewhere in between Streamlit, Voila, and Dash, and those have strong communities.

The next thing for me to understand is Voila. I’m not so interested in Dash, as I did not like the API and it was slow when I tried it.


One year later. I have settled on HoloViz Panel.

Panel bridges:

  • Can use it in a notebook, an editor (VS Code), or an IDE (PyCharm)
  • Can use it with ipywidgets, Bokeh widgets, and Panel widgets
  • Panel supports Jinja templates
  • Panel can be used to develop tools that run in a Notebook and on a web server
  • Panel can start additional servers that run on the side with fast hot reload

Streamlit, on the other hand, is one template, one big callback, no state, no way to “push/stream”, and does not integrate or intend to integrate with Jupyter (Support running streamlit in a Jupyter cell · Issue #510 · streamlit/streamlit · GitHub). It’s really good if you have one function you want to make interactive, but it has lots of limitations when your app grows.

Panel also has a nice programming flow where you can define your app layout and interactivity at the end of your notebook. It supports “from exploration to production and back”.


Is this thread still active?

@kafonek your post inspired me to build the Mercury framework for converting notebooks into web apps. There is no need to rewrite the notebook; just add a YAML header that defines the input widgets. The input widgets’ values are treated as normal variables in the code. The user provides input through the widgets, and the whole notebook is executed. There is a built-in app gallery, so many notebooks can be served from the same server.


Notebooks as packages ~ a pseudo-programmatic interface to a package.

I see some parallels between what you described and the way Julia Computing serves their enterprise customer base. Most of their customers are air-gapped or at least operate in their own private cloud on AWS. So what they do is deploy a fully fledged package server (like PyPI) within the customer’s network. Users can write/push packages to it, and pull/run packages from it, much like the way you described NBGallery.

You also mentioned abstracting “application logic in callback functions” out of the notebook. This is solved easily enough by importing a .py module from within the notebook, but then it becomes a question of how you distribute that folder of scripts/datasets with the notebook.

# Cell at top of notebook
from mypackage import mod1, mod2, mod3  # hypothetical package installed alongside the notebook

# Cell hooked up to a widget
# some widgets calling into mod1/mod2/mod3

PyPI packages allow you to bundle a lot of extra assets with them. For example, a MANIFEST.in file allows me to graft folder_of_datasets for distribution alongside my module. Packages are also inherently versioned. What if you could just pip install a package, and it came with a Jupyter notebook? That way the logic layer could be easily imported in code cells. You could use the NBGallery extension to browse your company’s Jupyter-enabled packages. Launching a Jupyter-enabled package creates a duplicate of the .ipynb file, defines the entrypoint of the session, and you can start using it.
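A sketch of what such a Jupyter-enabled package’s setup.py might declare (the package and file names here are hypothetical); setuptools’ package_data/MANIFEST.in handle the bundled assets, and pip handles the versioning:

```python
# Hypothetical setup.py for a package that ships its own entrypoint notebook
from setuptools import setup, find_packages

setup(
    name="mytool",
    version="0.1.0",
    packages=find_packages(),
    include_package_data=True,           # honor MANIFEST.in graft directives
    package_data={
        "mytool": ["notebooks/*.ipynb",  # the entrypoint notebook
                   "datasets/*.csv"],    # bundled reference data
    },
)
```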

This is a much more Pythonic flow. It also fits the capabilities of the userbase that you describe, which I have witnessed elsewhere; researchers aren’t computer scientists. I dislike git in products (why not versioned modules?). I dislike YAML in products (why not dictionaries/json?). I dislike CLIs (why not modules?).

I’d be surprised if the WASM ecosystem didn’t sort this out given their emphasis on UIs.


From a business perspective, it sounds like you are describing a more self-serve Enterprise Resource Planning (ERP) system, which is a proven model (see SAP) that is ready for disruption.
