is it possible to speed up the build time during extension development?
For example, following the astronomy picture tutorial, every “jlpm run build” takes about 5-10 seconds, which is perfectly fine. But when I run JupyterLab in watch mode (“jupyter lab --watch”), my system crashes while JupyterLab rebuilds. So instead of watch mode, I run “jupyter lab build” after each “jlpm run build”. This takes 1-2 minutes and leads to high coffee consumption and very low work efficiency.
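For reference, the loop I am describing looks roughly like this (run from the extension’s source directory, as set up in the tutorial; timings are from my machines):

```shell
# inner dev loop for a source-installed extension
jlpm run build     # ~5-10 s: compile the extension itself
jupyter lab build  # ~1-2 min: rebuild the whole lab application bundle
# ...then restart/refresh JupyterLab in the browser
```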
Is it possible to run JupyterLab in some kind of dev mode, so that “jlpm run build” and a jupyter lab restart suffice? After all, what is the purpose of “jupyter lab build” after the extension has been rebuilt with “jlpm run build”?
Can you provide more details on your JupyterLab/Node versions (and how they were installed) and the actual error you get during --watch? We have taken a number of steps to make the build more performant in 1.0, but it’s still a hulking brute of a build.
If you cd into the $PREFIX/share/jupyter/lab/staging directory and run jlpm build --watch, do you get the same error?
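That is, something along these lines (where $PREFIX is your environment prefix, e.g. the root of your conda env or virtualenv):

```shell
# run webpack's watcher directly in the staging area,
# bypassing the "jupyter lab --watch" wrapper
cd "$PREFIX/share/jupyter/lab/staging"
jlpm build --watch
```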
It is of course unfortunate that two build steps are needed (one to build your package, by whatever means you use, down to commonjs/esnext, and one to build the actual lab assets that ship to browsers), but otherwise every end user would be stuck with all of your dev dependencies too, and we have plenty as it is with “vanilla” JS builds.
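Concretely, the two steps look something like this (command names as in the JupyterLab 1.x extension tutorial; run from your extension directory, and check `jupyter labextension install --help` on your version if unsure about the flags):

```shell
# step 1: build your package to commonjs/esnext (your dev deps are only used here)
jlpm run build

# one-time registration of the in-development extension, deferring the app build
jupyter labextension install . --no-build

# step 2: build the lab assets that actually ship to browsers
jupyter lab build
```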
Of course, raising an issue on @jupyterlab/jupyterlab might get more/better eyes on this…
bollwyvl, thank you.
I don’t get an error; the machine simply restarts. This happened on two different machines with only 4GB/6GB of RAM, but I didn’t have the problem on an 8GB machine.
Versions are: JupyterLab 1.0.2, Node 11.14.0.
I do understand that a rebuild is necessary to ship an optimized JupyterLab, including extensions, to end users.
But wouldn’t it make sense to ship an unoptimized dev version to developers themselves, saving minutes of build time after each tiny extension update?
bollwyvl, is it possible to make use of multiple cores to run the “jupyter lab build” step in parallel for a speedup?
Yep, you are already seeing the unoptimized version, which is x times larger than the production build that we ship in a JupyterLab Python release.
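If build time is the concern, it may also be worth trying the build flags explicitly (note: `--dev-build` exists in 1.x, while `--minimize` only landed in later releases, so treat these as version-dependent and check `jupyter lab build --help`):

```shell
# explicitly request the unoptimized dev build of the application
jupyter lab build --dev-build=True

# on newer JupyterLab releases, minimization can also be skipped entirely:
jupyter lab build --dev-build=True --minimize=False
```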
In the jQuery-era Notebook, we just pulled Bower dependencies into a flat tree and used RequireJS to stitch them together. This meant you basically couldn’t have two versions of the same package in your Notebook UI, which eventually made it harder to support really extensive customization and integration.
In Lab, we tried writing our own plugin distribution system, but decided the cost/benefit of introducing a new package archive format was too high. We might get back to a truly modularly-deployed application with ES modules at some point, but for now Lab is stuck with the npm stack and commonjs/esnext, which is what the available tooling supports for stitching together all of the pieces needed.
No, not really. On a clean build, there isn’t a ton more to be done. Incremental builds could be faster, basically by doing a better job of splitting and bookkeeping, i.e. by keeping the live state of the build around on disk. But today, if --watch can’t be made to work for you, you’re stuck.