This is a good question, but fraught with peril.
In JupyterLab 2, we (ab)used the `jupyterlab-extension`
keyword in npm metadata, which was at least searchable on that site and by API. This is probably still viable, but not all extensions there work with JupyterLab 3, and not all extension authors are still publishing on npm.
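For the curious, the npm registry's search endpoint is the API in question. A minimal sketch of consuming it (the endpoint and keyword are real; the sample response below is hand-written in the API's shape so this runs offline, and the package names in it are made up):

```python
# Sketch: extract package names from an npm registry keyword search.
# Real endpoint; the response sample further down is illustrative only.
SEARCH_URL = (
    "https://registry.npmjs.org/-/v1/search"
    "?text=keywords:jupyterlab-extension&size=20"
)

def extension_names(search_response: dict) -> list:
    """Pull package names out of a parsed npm search API response."""
    return [obj["package"]["name"] for obj in search_response.get("objects", [])]

# Trimmed, hand-written sample of the response shape (hypothetical names):
sample = {
    "total": 2,
    "objects": [
        {"package": {"name": "@jupyterlab/foo", "keywords": ["jupyterlab-extension"]}},
        {"package": {"name": "jupyterlab-bar", "keywords": ["jupyterlab-extension"]}},
    ],
}

print(extension_names(sample))
```

In practice you'd fetch `SEARCH_URL` and feed the parsed JSON to `extension_names`; the catch, as noted above, is that keyword presence says nothing about JupyterLab 3 compatibility.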
In the brave new era of PyPI-distributed extensions, there have been some discussions of a family of trove classifiers that would help make these packages more discoverable, but to make them “real,” we’d kinda want to get it right the first time, and the discussion has stalled a bit.
This would allow a relatively straightforward discovery mechanism through the PyPI web UI, once it was rolled out and extension authors started using it, but it would suffer from the same problem as npm: there is no curation on either registry, really just… lots of packages we hope aren’t malicious.
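For a concrete sense of what that might look like, here is how such a classifier could be declared in a package's `pyproject.toml`. To be clear: `Framework :: Jupyter` exists today, but the longer classifier string below is hypothetical, since the family hasn't been ratified:

```toml
[project]
name = "jupyterlab-myextension"   # hypothetical package name
version = "0.1.0"
classifiers = [
    "Framework :: Jupyter",                              # exists on PyPI today
    "Framework :: Jupyter :: JupyterLab :: Extensions",  # hypothetical, not ratified
]
```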
Building something more involved/bespoke is of course possible, but nobody has stepped up to build, then curate/maintain it. Presumably, such a beast could be relatively automated, where a given extension author could add a canonically-named file (e.g. the package name on PyPI) in a repository somewhere, and a nightly job would rebuild a static site that:
- grabbed the wheel
- found the metadata of all the extensions under `{sys.prefix}/share/jupyter/labextensions`
- built a page for it
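The middle step can be sketched without any real infrastructure: a wheel is just a zip, and data files in a wheel land under `<dist>.data/data/...`, so the labextensions' `package.json` files are findable by path. Everything below is illustrative (the fake wheel and its package name are made up), not a real indexer:

```python
# Sketch of the "nightly job" idea: open the wheel as a zip, find each
# labextension's package.json, and pull out page-worthy metadata.
import io
import json
import posixpath
import zipfile

def labextension_metadata(wheel_bytes: bytes) -> list:
    """Return the parsed package.json of each labextension shipped in a wheel.

    Data files in a wheel live under '<dist>.data/data/...', so labextensions
    end up at '...data/share/jupyter/labextensions/<name>/package.json'.
    """
    found = []
    with zipfile.ZipFile(io.BytesIO(wheel_bytes)) as wheel:
        for entry in wheel.namelist():
            if (
                "share/jupyter/labextensions/" in entry
                and posixpath.basename(entry) == "package.json"
            ):
                found.append(json.loads(wheel.read(entry)))
    return found

# Build a minimal fake wheel in memory so the sketch runs end-to-end.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as fake_wheel:
    fake_wheel.writestr(
        "myext-0.1.0.data/data/share/jupyter/labextensions/myext/package.json",
        json.dumps({"name": "myext", "version": "0.1.0"}),
    )

metas = labextension_metadata(buf.getvalue())
print([(m["name"], m["version"]) for m in metas])
```

A static-site generator would then render one page per entry in `metas`; the hard part isn't this code, it's the curation around it.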
Ideally there would be some kind of required test, written in plain language, which would yield a predictable screenshot or two.
I wouldn’t foresee anything more dynamic coming first-party anytime soon, as folks are pretty heads-down just trying to keep everything running.