Certainly, since that remote URL is public and doesn’t require FTP for retrieval.
Launch JupyterLab by clicking this. (The source of that is the JupyterLab + Binder example repo.)
When the session launches, add a new cell to the notebook that opens by default and paste the following code into it:
!curl -OL https://object.cscs.ch/v1/AUTH_c0a333ecf7c045809321ce9d9ecdfdea/hbp_vf_testdata/demo_notebook.ipynb
%pip install brian2
%pip install matplotlib
Run that cell.
After that finishes running, you should see the retrieved file
demo_notebook.ipynb listed in the file navigation panel on the left.
Double-click that file name and you can now run the cells in the notebook. Note that a few cells are set up to purposefully trigger errors, and so you cannot just run all the cells straight through.
By the way, if the other notebook were in a Github repository, you could use nbgitpuller to construct a link that both launches the environment backing the session and pulls the notebook from another repo into the launched Jupyter session. You can go here to get a form for generating such links. The environment would already need to have the brian2 and matplotlib modules installed.
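For reference, the links that form generates have roughly this shape (the repo names here are placeholders; let the form handle the URL encoding rather than building it by hand):

```
https://mybinder.org/v2/gh/<env-owner>/<env-repo>/HEAD?urlpath=git-pull%3Frepo%3D<url-of-content-repo>
```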
The instructions above will cover you if you only want to do this once for your own use.
If you want to do this in a way you could share with others, and also include the other notebooks in the series alluded to at the bottom of the one you linked to, you'd need to make your own archive, for example a Github repo, where you put a
requirements.txt file that installs the brian2 and matplotlib modules by listing them; see a
requirements.txt example here.
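For this particular notebook, a minimal requirements.txt would just list the two packages, one per line (version pins are optional):

```
brian2
matplotlib
```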
As for retrieving the notebooks, you'd make a
postBuild file that handles that step. After you build the image once by launching MyBinder pointed at your repo, the notebooks will be 'baked' into the image that MyBinder uses, even though they are not part of the repository you'll launch. Constructing a working
postBuild can be tricky because, in my experience, you cannot just use the Github browser-based editor to do it: you need to run git locally and push a commit to the repo so that the file permissions of
postBuild allow executing the commands in it. The JupyterLab example repo I referenced above has an example
postBuild file in it. (If you list all the notebook URLs here, I'd be happy to help set it up; you could then fork my Github repo and continue developing from that.)
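As a sketch, a postBuild that fetches the demo notebook from the URL above would look like this; add one curl line per notebook, and remember the file must be committed with its executable bit set:

```shell
#!/bin/bash
# postBuild: runs once when MyBinder builds the image, so the fetched
# notebooks end up baked into the image itself.
# -O keeps the remote filename; -L follows any redirects.
curl -OL https://object.cscs.ch/v1/AUTH_c0a333ecf7c045809321ce9d9ecdfdea/hbp_vf_testdata/demo_notebook.ipynb
```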
Alternatively, you could use a
start file to handle the step of getting the notebooks; however, that step would then run at the actual start-up of each individual triggered Jupyter session, and so it would add some launch time while those are fetched. This is not ideal if you want launch time to be as fast as possible; depending on how large or how many notebooks are being fetched, the additional time may or may not be significant. However, you may still wish to put the retrieval step in the
start configuration file if you are indeed going to be developing the notebook content further, with it changing often. That way you won't have to remember to trigger a rebuild of the repo that is pulling the remote content every time you change the notebook content that is sourced remotely. If the retrieval step is in the
start file, it will always pull the latest version. This is a valid compromise that may extend the launch time of the session somewhat while ensuring the content is the latest.
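A corresponding start file would look much the same, except the fetch runs at every session launch, and the script must end by exec-ing the command it is handed so the notebook server still starts (this is how repo2docker's start hook works):

```shell
#!/bin/bash
# start: runs at the beginning of every user session, so it always
# pulls the current version of the remote notebook.
curl -OL https://object.cscs.ch/v1/AUTH_c0a333ecf7c045809321ce9d9ecdfdea/hbp_vf_testdata/demo_notebook.ipynb
# Hand control back to the notebook server command.
exec "$@"
```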