“How do I disable downloading files from JupyterHub?” is an extremely common question, especially for folks working with sensitive data. This is impossible to do in the absolute sense - if someone can see data on a screen, they can copy-paste it.
However, there are ways to make it harder, and I want to list some of those ideas here. It’s important that these don’t affect users at all when they’re doing ‘regular’ data analysis work - security at the cost of usability forces people to find holes in the system so they can get their damn work done. These ideas are ordered to be progressively more effective without drastically affecting the user experience.
Hopefully someone can then contribute code / config to make these happen.
- Disable the download buttons in notebook & lab. This is a very minimal measure, but extremely helpful: it forces users to use non-GUI methods to download things, and that’s already a win.
- Wrap the default ContentsManager so it can deny access to non-`.ipynb` files. The ContentsManager is the primary way to get things off your filesystem to your browser. I think only `.ipynb` files are needed for notebook / lab to work, and disabling all other access makes downloading things harder. To prevent people from just renaming data files to `.ipynb`, you can also validate that a file is actually a notebook before serving it.
- In containers, make sure the user can’t actually modify the `notebook` package that is used to run the Jupyter Notebook server. Python is a dynamic language, and (1) and (2) can be easily subverted if users can just edit the Python files containing that logic! So with standard Linux permissions, you must lock down the environment where the `notebook` package is installed. Additionally, you’d need to lock down the notebook server’s config files, so users can’t just change the config. Blocking write access to the paths listed by `jupyter --paths` would do the trick here.
- Block all outgoing internet access from users, except for specifically allowed targets. This prevents people from just sending out your data to the internet and downloading it from there. Consider using a proxy for outgoing connections here, so you can log as you wish.
- Throttle network connections to the user server in such a way that regular usage (ipynb loading, frontend assets, etc.) is fine, but larger data downloads are intolerably slow. This could be applied just between the user server and the proxy, since all user access to user servers goes via the proxy. Maybe something that starts throttling once a TCP connection has transferred a certain number of bytes?
- [Audit] Efficient payload logging at the JupyterHub proxy level, so we can attribute downloads when needed.
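To make idea (2) concrete, here is a minimal, untested sketch of a wrapping ContentsManager. It assumes the Jupyter ContentsManager API, where `get()` returns a model dict with `type` and `content` keys; a real implementation would raise tornado’s `HTTPError(403)` instead of the placeholder exception, and the notebook check here is a deliberate heuristic, not full nbformat schema validation.

```python
import json


class AccessDenied(Exception):
    """Placeholder; a real ContentsManager would raise tornado.web.HTTPError(403)."""


def is_probably_notebook(nb):
    """Heuristic notebook check: genuine .ipynb files are JSON objects with
    'cells' and 'nbformat' keys, so a data file renamed to .ipynb fails this."""
    if isinstance(nb, (str, bytes)):
        try:
            nb = json.loads(nb)
        except ValueError:
            return False
    return isinstance(nb, dict) and "cells" in nb and "nbformat" in nb


def notebook_only(base_cls):
    """Wrap any ContentsManager class so it serves only directories and
    files that both end in .ipynb and look like real notebooks."""

    class NotebookOnlyContentsManager(base_cls):
        def get(self, path, content=True, **kwargs):
            model = super().get(path, content=content, **kwargs)
            if model["type"] == "directory":
                # Directory listings are needed for the file browser to work.
                return model
            if not str(path).endswith(".ipynb"):
                raise AccessDenied(path)
            if content and not is_probably_notebook(model["content"]):
                # Catches data files that were merely renamed to .ipynb.
                raise AccessDenied(path)
            return model

    return NotebookOnlyContentsManager
```

Assuming the default `FileContentsManager` as the base (an assumption about your setup), this might be wired up in config with something like `c.ServerApp.contents_manager_class = notebook_only(FileContentsManager)`.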
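For idea (3), the test suite mentioned below could start from a stdlib-only check like this sketch: given the directories you care about (for example, those printed by `jupyter --paths` - collecting them is left out here), it reports any that the current user could still modify. Since `os.access` reflects the permissions of the process running it, the check should run as the same unprivileged user the notebook server runs as.

```python
import os


def writable_paths(paths):
    """Return the subset of the given paths that the current user can modify.
    In a properly locked-down image, the system Jupyter code and config
    directories (everything outside the user's own home directory) should be
    read-only for the notebook user, so this should return an empty list."""
    return [
        p for p in paths
        if os.path.exists(p) and os.access(p, os.W_OK)
    ]
```

A container test could then simply assert that `writable_paths(...)` over the system Jupyter directories is empty.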
If you have users on your JupyterHub, you semi-trust them. For sensitive environments, strong contracts & other non-technical measures are just as important as technical safeguards. Auditing is extremely important in those cases, and not covered in this post at all. However, contracts only work if you are an organization large enough to go after people who violate them. When you are not, making it technically harder to download data becomes even more important.
As far as I know, there are no public & well documented implementations of any of these. I would love:
- A classic notebook extension for (1)
- A JupyterLab extension for (1)
- A wrapper ContentsManager for (2)
- Detailed guidelines for (3), mostly around building containers where this is true. z2jh will also need to be configured correctly for this to work.
- A test suite that ensures that (3) is really true
- Guidelines on how to do (4) in the most common environments. This can probably just live in z2jh.
- A kubernetes sidecar for doing (5). This would exist in the same namespace as the user pod, and could use anything from `tc` to eBPF. If there is a pre-existing kubernetes solution for this, documentation on how to deploy it with z2jh would be most helpful.
If you have solutions to any of these, I would love to see them!