Local continuous test runner with pytest and watchdog.
pytest-watch is a zero-config CLI tool that runs pytest and re-runs it when a file in your project changes. It beeps on failures and can run arbitrary commands on each passing or failing run.
Whether or not you use the test-driven development method, running tests continuously is far more productive than waiting until you're finished programming to test your code. Additionally, manually running pytest each time you want to see if any tests were broken involves more wait time and cognitive overhead than simply listening for a notification. This can be a crucial difference when debugging a complex problem or working on a tight deadline.
$ pip install pytest-watch
$ cd myproject
$ ptw
 * Watching /path/to/myproject
Note: It can also be run using its full name, pytest-watch.
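For a quick check that watching works, any module that pytest would normally collect will do; the file below is a hypothetical example, not part of pytest-watch:

# test_example.py -- throwaway module, purely illustrative
def inc(x):
    return x + 1

def test_inc():
    # Change this assertion and save the file; ptw should re-run the suite.
    assert inc(1) == 2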
Now develop normally and check the terminal every now and then to see if any tests are broken. Alternatively, pytest-watch can notify you when tests pass or fail:
$ ptw --onpass "say passed" --onfail "say failed"
$ ptw --onpass "growlnotify -m \"All tests passed!\"" \
      --onfail "growlnotify -m \"Tests failed\""
Or on Windows, using Console Flash:

> ptw --onfail flash
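If neither say nor growlnotify is available, any command will do; here is a hypothetical stand-in (the name notify.py and its behavior are assumptions for illustration) that rings the terminal bell and prints a banner:

# notify.py -- hypothetical helper for --onpass/--onfail, standard library only
import sys

message = sys.argv[1] if len(sys.argv) > 1 else "tests finished"
print("\a" + "=" * 40)   # \a rings the terminal bell
print(message)
print("=" * 40)

It would be wired up the same way as the commands above, e.g. ptw --onpass "python notify.py passed" --onfail "python notify.py FAILED".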
You can also run a command before the tests run, e.g. seeding your test database:
$ ptw --beforerun init_db.py
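As a sketch of what such a script might contain (the SQLite file name and schema here are assumptions, not anything pytest-watch requires), init_db.py could reset and seed a test database:

# init_db.py -- illustrative seeding script for --beforerun
import sqlite3

conn = sqlite3.connect("test.db")   # assumed database file
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("DELETE FROM users")   # start every run from a known state
conn.execute("INSERT INTO users (name) VALUES ('alice'), ('bob')")
conn.commit()
conn.close()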
Or after they finish, e.g. deleting a SQLite file. Note that this script receives the exit code of pytest as an argument.
$ ptw --afterrun cleanup_db.py
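A matching cleanup script might look like the sketch below; the test.db path is an assumption carried over from the seeding example, and pytest's exit code simply arrives as the first command-line argument:

# cleanup_db.py -- illustrative cleanup script for --afterrun
import os
import sys

exit_code = sys.argv[1] if len(sys.argv) > 1 else "unknown"
if os.path.exists("test.db"):       # assumed database file
    os.remove("test.db")
print("removed test.db (pytest exit code: %s)" % exit_code)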
You can also use a custom runner script for full control over how pytest is invoked:
$ ptw --runner "python custom_pytest_runner.py"
Here's a minimal runner script that runs pytest and prints its exit code:
import sys
import pytest

print('pytest exited with code:', pytest.main(sys.argv[1:]))
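A runner can also adjust the arguments before delegating; this variant (illustrative, not from the pytest-watch docs) prepends -x so every watched run stops at the first failure:

# custom_pytest_runner.py -- stop at the first failure on every run
import sys
import pytest

sys.exit(pytest.main(["-x"] + sys.argv[1:]))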
Need to exclude directories from being observed or collected for tests?
$ ptw --ignore ./deep-directory --ignore ./integration_tests
See the full list of options:
$ ptw --help
Usage: ptw [options] [--ignore <dir>...] [<directories>...] [-- <pytest-args>...]

Options:
  --ignore <dir>     Ignore directory from being watched and during collection (multi-allowed).
  --ext <exts>       Comma-separated list of file extensions that can trigger a new test run when changed (default: .py). Use --ext=* to allow any file (including .pyc).
  --config <file>    Load configuration from <file> instead of trying to locate one of the implicit configuration files.
  -c --clear         Clear the screen before each run.
  -n --nobeep        Do not beep on failure.
  -w --wait          Waits for all tests to complete before re-running. Otherwise, tests are interrupted on filesystem events.
  --beforerun <cmd>  Run arbitrary command before tests are run.
  --afterrun <cmd>   Run arbitrary command on completion or interruption. The exit code of "pytest" is passed as an argument.
  --onpass <cmd>     Run arbitrary command on pass.
  --onfail <cmd>     Run arbitrary command on failure.
  --onexit <cmd>     Run arbitrary command when exiting pytest-watch.
  --runner <cmd>     Run a custom command instead of "pytest".
  --pdb              Start the interactive Python debugger on errors. This also enables --wait to prevent pdb interruption.
  --spool <delay>    Re-run after a delay (in milliseconds), allowing for more file system events to queue up (default: 200 ms).
  -p --poll          Use polling instead of OS events (useful in VMs).
  -v --verbose       Increase verbosity of the output.
  -q --quiet         Decrease verbosity of the output (precedence over -v).
  -V --version       Print version and exit.
  -h --help          Print help and exit.
CLI options can be added to a [pytest-watch] section in your pytest.ini file to persist them in your project. For example:
[pytest]
addopts = --maxfail=2

[pytest-watch]
ignore = ./integration-tests
nobeep = True
As an alternative, pytest-xdist offers the --looponfail (-f) option (and distributed testing options). That mode instead re-runs only the tests which have failed until you make them pass. This can be a speed advantage when trying to get all tests passing, but it leaves out the discovery of new failures until then. It also drops the colors output by pytest, whereas pytest-watch doesn't.
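For reference, once pytest-xdist is installed that mode is invoked through pytest itself:

$ pip install pytest-xdist
$ pytest -f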
If you want to edit the README, be sure to make your changes to README.md and run the following to regenerate the README.rst file:
$ pandoc -t rst -o README.rst README.md
If your PR has been waiting a while, feel free to ping me on Twitter.