Thoughts on using tox?

Hi all,

I’m new to contributing to JupyterHub, and one of the first things I’ve run into is the various contributor guides (both within the jupyterhub repo itself and for the Jupyter community more generally). Some of those have a decent start on getting a local development environment up and running, but they also have some gaps; e.g. I didn’t have a native sqlite package installed, so running tests and starting jupyterhub didn’t work for a while, without any clear errors.

A common pattern I’ve noticed in these contributor / local-dev setup docs is that they tell you to install things essentially globally. That’s something I’d prefer not to do, and in fact I couldn’t get the tests to run with pytest until I created a virtual environment (pip install was segfaulting outside the venv).

I opened an RFE-style issue to try and capture some of this [1]. I didn’t see any existing threads or docs about isolating the dev environment, so I figured I’d start a thread here.

What I’m proposing is adding a tox.ini configuration file for running tests, starting jupyterhub from within a venv, building docs, etc. The contributor docs then become pretty simple: pip install tox virtualenv && tox
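To make that concrete, a minimal tox.ini could look something like the sketch below. This is just an illustration; the env name, the dev-requirements.txt path, and the pytest invocation are my assumptions about the repo layout, not a finished config:

```ini
[tox]
envlist = py36

[testenv]
# install jupyterhub itself into the venv in editable mode
usedevelop = True
# assumes dev/test dependencies are listed in dev-requirements.txt
deps = -r{toxinidir}/dev-requirements.txt
commands = pytest -v jupyterhub/tests
```

With something like that in place, a newcomer runs pip install tox and then tox, and tox handles creating the venv, installing dependencies, and running the tests.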

Furthermore, the travis CI config and tox config could be synchronized so that the travis config just runs tox commands, giving you the same interface for local testing and for what runs in travis (I had to look at how travis was configured in order to get some tests running locally).

Anyway, I’m assuming this has been brought up before but since I’m new and couldn’t find any existing threads about it in discourse I figured I’d ask.

[1] https://github.com/jupyterhub/jupyterhub/issues/2961

Thanks,

Matt

I can see the benefits of using tox for CI builds and tests. However I’ve had problems when using it for development/testing. It seems to be quite difficult to use it for “interactive” development where you e.g. make one small change to a file and just want to run a single test. Tox often attempts to reinstall all the dependencies (takes time) or runs the entire test suite (frustrating when the single test you want to run takes less than a second).

Do you know how to avoid this?

I think with pip/wheel caching in the virtual environment, the re-install of the dependencies (if it even happens) shouldn’t be a problem.

As for passing args through to pytest, yes, that’s pretty simple; you can do something like this:

tox -e py36 -- pytest -vv --maxfail=1 jupyterhub/tests/test_api.py

Everything after the -- is passed through, but you have to set up the tox configuration that way (which is trivial).
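For reference, the pass-through relies on the {posargs} substitution in the tox config; a hypothetical [testenv] section matching the invocation above might be:

```ini
[testenv]
# everything after "--" on the tox command line replaces {posargs};
# with no extra args, the default command inside the braces runs
commands = {posargs:pytest -v jupyterhub/tests}
```

So tox -e py36 with no extra args runs the default pytest command, while tox -e py36 -- pytest -vv ... runs exactly what you typed after the --.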

I have mixed feelings about the automation of the “dev setup” steps. When we’ve discussed the topic before (for other repos managed by essentially the same team) the takeaway was that we have “five developers with seven different ways to setup things”.

The point being that people want to have explicit control over the install/setup steps because they have a “bespoke” setup on their machine or machines.

For example the docs might mention virtualenv or python -m pip install as the way to install things, but in reality people use conda envs and conda install. So the instructions aim to be a “lingua franca” for setting things up that someone with no opinions can copy&paste to get a working setup, while also allowing experienced/old/stuck-in-their-ways people to read them, know what is meant, and translate into their personal preferred way.

Dev setup tends to be a one-time cost, not a recurring one for a given person. However, the instructions we have should have a very high success rate for new devs getting things set up. This means making small tweaks to what we have to improve things gets my support.

Overall I like the explicitness and that I have “direct” access to things (running just one test, setting up the env “my way”).

Incremental-change’ly-Tim :wink:

https://tox.readthedocs.io/en/latest/example/basic.html#depending-on-requirements-txt-or-defining-constraints

(experimental) If you have a requirements.txt file or a constraints.txt file you can add it to your deps variable like this:
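The example from the linked page is essentially this (reproduced here from the tox docs, so double-check it against the link):

```ini
[tox]
envlist = py36

[testenv]
deps =
    -rrequirements.txt
commands = pytest
```

tox passes the -r entry straight to pip as a requirements file, so the dependency list stays defined in one place for both tox users and plain-virtualenv users.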

This potentially allows tox to co-exist with the virtualenv instructions as far as managing dependencies is concerned. However, looking through the JupyterHub Travis config, there’s a lot more setup involving the global database and/or docker, which probably shouldn’t be forced on everyone. How much of this would you want in tox?

I get that people want to be able to do things their way (which I referenced in the issue with conda and docker), and they still can. I don’t necessarily agree that it’s a one-time setup, though. That’s certainly not the case with travis CI, which is why it’s automated in the travis config file, so it can be reliably reproduced. My main goal is to automate this setup and avoid having to install jupyterhub’s dependencies globally in my dev VM, on which I have other projects running in isolation. There are multiple ways to do that, and tox is what I consider an easy one when the instructions are already telling people to use pip.

Is there a concern that if tox config is added to jupyterhub that there would be a push for duplication with conda, docker, random bash script, etc?

I would start with a minimal setup, so ignore mysql, postgresql and docker, and assume the sqlite dependencies are installed (note that sqlite is not called out as the default DB in the getting started / contributing instructions in the docs, though it is elsewhere [1]). That means setting up dependencies (npm install, pip install), after which the actual test command is the same as the script in the travis config: running pytest. You would then have a different tox environment for each of the jobs defined in travis. A minimal starting set would be py35/36/37/38, which just run pytest, plus a lint target that runs what the autoformatting check job in the travis config does. Since I’m on Ubuntu 18.04 with py36, I would run that along with the linters: tox -e py36,lint

I would also automate the docs builds in a tox env which looks like it’s not done in travis CI today so updating and building the docs would be as easy as tox -e docs.
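Putting those pieces together, the envlist plus lint and docs environments could be sketched like this. All of the paths and tool choices here are guesses on my part (e.g. that the autoformatting check is driven by pre-commit and that docs requirements live under docs/), so treat it as an outline rather than a working config:

```ini
[tox]
envlist = py35,py36,py37,py38,lint

[testenv]
deps = -r{toxinidir}/dev-requirements.txt
commands = pytest -v jupyterhub/tests {posargs}

[testenv:lint]
# assumes the autoformatting check job uses pre-commit
deps = pre-commit
commands = pre-commit run --all-files

[testenv:docs]
# hypothetical paths; adjust to wherever the docs sources actually live
deps = -r{toxinidir}/docs/requirements.txt
commands = sphinx-build -W docs/source docs/_build/html
```

Then tox -e py36,lint and tox -e docs map directly onto the workflows described above.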

That gets you to the point where you can start replacing parts of the travis config and what’s left is the environment setup for those jobs that run with different databases.

Anyway, that’s how I’d approach this. Maybe it would be best if I put together a WIP pull request to get a feel for what this looks like in actual code unless it’s a clear non-starter from the core team? Worst case is I have the tox config in my forked repo that I maintain separately.

[1] https://jupyterhub.readthedocs.io/en/stable/reference/database.html


Thanks for the additional info! If you’re completely fine with a PR not being merged then there’s no harm in opening one.

My (personal) view is that if you can add tox whilst keeping the flexibility mentioned by @betatim and without adding to the maintenance burden, then tox is fine to use in CI. After all, the Travis CI setup has to be done somehow, so it doesn’t matter too much whether it’s tox or some other custom script that has to be maintained.

Thanks for the replies and thoughts!

I see it exactly the other way around: travis is a robot, it doesn’t care if the setup instructions are 10, 20, 50 or 100 lines. It just executes them all. The hard part about our CI is that it sometimes (rarely) breaks or stops doing what it is meant to be doing. This is when someone has to go and understand what it should be doing, what it is doing, and why the two are different. Usually breakage happens because something external to the repository under test changed: someone released a new version of some tool, a service got deprecated, etc. Every layer of abstraction or automation that we add makes this “once every few months” debugging of a complex testing setup on a remote machine I can’t even ssh to more intimidating.

It is the combination of rare breakage, external factors, a solid&debugged travis.yml and complex test scenarios that make me very conservative when it comes to more automation or changing the CI setup “for fun”. I’d much rather see those hours used to develop new features or used next time when the CI is broken and we can’t merge PRs because no one has time/spare brain power to debug what has changed somewhere on the internet that causes our CI to break.

I use conda environments to achieve this isolation on my macbook and you use tox (I think) and a third person uses vagrant to manage separate VMs for separate projects :slight_smile:

This diversity of setups is why I think the instructions for humans in the docs should be the “lowest common denominator” instead of picking one tool and giving instructions for it. The unsuspecting newcomer (to Python development) can copy&paste the commands to get something up and running. The grizzled veteran developers will all be equally annoyed that the instructions don’t use “the correct tool” for automating this, but they can read the commands and translate them to their way of doing things. With explicit commands you have to translate them, but with automation you’d first have to learn enough about the tool we use for automation to then translate it to your way of doing things.

As you noticed we already struggle to keep things updated, coherent and nice with the simple instructions we have. For them to be friendly to newcomers it is probably more about how discoverable they are and the words around the commands than the commands themselves. No amount of automation will help us with the words for humans.

As an example: in Zero2JupyterHub we recently switched to having two tracks for installing things: automated and manual. One thing that has happened is that the automated setup is very hard for newcomers to the project to adapt and the number of regular contributors who understand it well enough to help is also small. The contributing guide is hard to read because it switches between the two modes of installing things. While I don’t know if the switch was a success or not, I think right now the majority feels like “it is different and also has problems, but hey they are different from the ones we had before”.

So overall my feeling is that unless there is a very concrete problem that is painful for several people making changes to the CI setup or dev setup instructions from one type of setup to another different type is unlikely to reduce the total pain, it just moves it. My money is on small incremental changes to the existing setup/instructions to make it more robust, easier to follow, well understood by more people, etc as the way to reduce the total pain (and time spent maintaining this support infrastructure).


Ack, I understand and thanks for the thoughtful reply. I was a maintainer for several years on the biggest / most active openstack project there is (nova) and totally understand the feeling of “we have bigger fish to fry”. As noted earlier, if I get a wild hair to PoC the tox thing in my own fork I can post it back here for others that are interested but won’t pursue it into the main repo. Instead I’ll see if I can push some changes to the contributor docs to hopefully fill some of the gaps I recently ran into when trying to get tests working locally (probably the biggest is just that I didn’t have sqlite installed).


I want to reply with just :heart: but discourse makes me write a whole message :joy:.

Perhaps late to the discussion. For new contributors, tox can be helpful for building docs locally, running tests locally, and linting locally. I’m not a fan of using it for CI. It does, however, lower the barrier for new contributors, and it should be able to co-exist with the existing workflow.