I think for many folks (at least in my experience), Notebook vs Lab is quite an odd situation to encounter within the scientific computing space, which is why there is so much debate and interest in finding ways to close the gap. When I look back on my years of teaching folks scientific computing and tools (RStudio, SPSS, Matlab, etc.), there was never really a situation where a major upgrade/expansion of the toolchain led to a split in the community (except perhaps Python 2 vs 3).
One of the core causes, I think, is that many of these tools have a natural growth path from intro/non-technical user to power user, and this is where JupyterLab can feel awkward. The story here is improving over time, but there is quite a bit of friction at the moment.
I’ll use the example of RStudio. Often when R is taught as part of an intro stats or analytics course, most students are only taught to use the text editor and console; the rest of the features are ignored. And for most people this is great. You double-click to open RStudio, select a chunk of text, use a shortcut to send it to the console, and suddenly you have results and plots. You could use RStudio this way year after year and never need to use (or even visually see) any of the advanced features (e.g., RMarkdown, R Notebooks, version control, package management) unless you started working on projects that required them. In that case, RStudio would “grow” with you as your work’s complexity increased, while still serving the simple user as expected. (Note: I am describing my experience with developers and technical non-dev users like scientists and researchers.)
If you have been using the classic notebook, however, you have to deal with a much more dramatic change, and I find this is at the heart of the issue, rather than the community or the tool having different use cases. If you are a classic notebook user and you upgrade to Lab, all of your extensions break, the behavior of the notebook is not exactly the same (perhaps this is better aligned these days), and in practice it is more difficult to manage (e.g., people are often bewildered by JupyterLab’s build process because they have never encountered one before).
Instead of being perceived as a simple upgrade (e.g., RStudio version x to x+1), it can feel like you are using a different tool without a ton of clarity as to why (or even a worse one, depending on the extensions you relied on). Early on, one of the main blockers I encountered was users not understanding why their nb_extensions were not working when they started JupyterLab, or interpreting the presence of a build process as “this tool is in alpha/beta.”
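To make the contrast concrete, here is a rough sketch of what the two extension-install flows looked like (the extension names are just illustrative examples; and note that since JupyterLab 3, prebuilt extensions installed via pip largely sidestep the build step):

```shell
# Classic notebook: extensions are enabled per user and take effect on the
# next page refresh -- no compilation step involved
pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
jupyter nbextension enable toc2/main

# JupyterLab (pre-3.0): source extensions are compiled into the application
# bundle, so installing one kicks off a Node.js/webpack build
jupyter labextension install @jupyterlab/toc   # fetches npm packages, rebuilds the app
jupyter lab build                              # explicit rebuild; can take minutes
```

For someone whose mental model is “pip install, refresh the browser,” watching webpack output scroll by understandably reads as “unfinished tool.”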
On the development/IDE side of things, this “growing with you” problem shows up in higher-complexity work. For a power user, JupyterLab provides great tools to do really advanced modeling and analysis with tons of flexibility. But you hit a wall when your projects reach a certain size and you need more advanced functionality like multi-file/multi-line editing (particularly of text files). The reason this is an issue is that most of the standard IDEs (PyCharm, VSCode, etc.) have pretty poor support for data science workflows, due to (but not limited to) things like not supporting all the packages you might need (e.g., Altair in PyCharm), extremely clunky visualization tools (e.g., VSCode’s plot viewer/interactive window), poor non-static intellisense (e.g., pandas method or dataframe column completion), or just buggy notebook support in general. Jeremy Howard has many good comments/perspectives on this topic.
The result is a situation where you need to use two tools simultaneously: JupyterLab for the iterative parts of the workflow and an IDE for package development/advanced editing. This is fraught with its own issues: people now need to maintain and learn two separate tools, and it often leads to more mistakes due to things like frequent manual copy-and-paste between them. Going back to the RStudio example (though this also applies to things like Matlab, SAS, etc.), you generally see everyone using a single tool from intro to advanced user unless they are doing something far outside a data science/analytical task.
And one thing I would also point out is that these are not developers in the traditional sense but data scientists or statistician/academic types who know how to code for analytical purposes, and their projects often have different needs than what you see in a traditional software role (though there are areas of overlap). So when people say things like “notebooks suck for developers,” it is not because someone is trying to build something like Instagram from a notebook, but because building repeatable analyses/models can be quite difficult at the moment due to the lack of standard tooling/IDE support around these things.
I think many people see JupyterLab as being able to overcome these frictions since it has all of the right ingredients to make you successful, whether you are just starting out and need to run simple notebooks/scripts or are doing big modeling projects on a cluster. Much of this can be fixed with UX changes, and as the extension ecosystem continues to mature, the friction I describe above is decreasing pretty quickly each year.
Note: This is particularly focused on the Python community (since Julia and R users have their own IDEs so they would have a different perspective)