A Benchmarking and Performance Analysis Framework
The code base includes three sub-systems. The first is the collection agent, pbench-agent, responsible for providing commands for running benchmarks across one or more systems, while properly collecting the configuration data for those systems and specified telemetry or data from various tools (e.g., sar, vmstat, or perf).
The second sub-system is the pbench-server, which is responsible for archiving result tarballs, indexing them, and unpacking them for display.
The third sub-system is the web-server JS and CSS files, used to display various graphs and results, and any other content generated by the pbench-agent during benchmark and tool post-processing steps.
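As an illustration of the first sub-system, a typical pbench-agent session might look like the sketch below. The command names come from the pbench-agent toolset, but options and defaults vary by version, and the --config label "example-run" is a user-chosen name:

# Register the default set of tools on the local host.
$ pbench-register-tool-set

# Run an arbitrary workload under pbench instrumentation; everything after
# "--" is the command being benchmarked.
$ pbench-user-benchmark --config=example-run -- sleep 60

# Ship the collected results and configuration data to the configured server.
$ pbench-move-results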
The pbench Dashboard code lives in its own repository.
Instructions on installing pbench-agent can be found in the Pbench Agent Getting Started Guide.
For Fedora, CentOS, and RHEL users, we have made available COPR builds for the pbench-agent, pbench-web-server, and some benchmark and tool packages.
Install the pbench-web-server package on the machine from which you want to run pbench-agent workloads, allowing you to view the graphs before sending the results to a server, or even if there is no server configured to receive results.
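For example, enabling that COPR repository and installing the packages might look like the following sketch. The repository name ndokos/pbench is an assumption here; check the COPR page for the actual location, and note that dnf copr requires dnf-plugins-core:

# Enable the pbench COPR repository (repository name assumed).
$ sudo dnf copr enable ndokos/pbench
# Install the agent along with the local web-server package.
$ sudo dnf install pbench-agent pbench-web-server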
You might want to consider browsing through the rest of the documentation.
Refer to the Pbench Agent Getting Started Guide.
TL;DR? See "TL;DR - How to set up the pbench-agent and run a benchmark" in the main documentation for a super quick set of introductory steps.
The latest source code is at https://github.com/distributed-system-analysis/pbench.
The pbench dashboard code is maintained separately at https://github.com/distributed-system-analysis/pbench-dashboard.
For discussions, we use Google Groups.
Below are some simple steps for setting up a development environment for working with the Pbench code base. For more detailed instructions on the workflow and process of contributing code to Pbench, refer to the Guidelines for Contributing.
$ git clone https://github.com/distributed-system-analysis/pbench
$ cd pbench
To run the unit tests quickly from within the checked-out source tree, execute:
jenkins/run jenkins/tox -r --current-env -e jenkins-pytests
jenkins/run jenkins/tox -r --current-env -e jenkins-unittests
The above commands run the tests in a Fedora-based container with all the proper packages installed.
If you want to run the unit tests outside of that environment, you need to install tox properly in your environment (Fedora/CentOS/RHEL):
$ sudo dnf install -y perl-JSON python3-pip python3-tox
Once tox is installed, you can run the unit tests (use tox --listenvs to see the full list); e.g.:
tox -e util-scripts -- for agent/util-scripts tests
tox -e server -- for server tests
tox -e lint -- to run the linting and code style checks
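For instance, a quick session using the environments listed above might look like:

# Show every available tox environment.
$ tox --listenvs
# Run only the lint and code style checks.
$ tox -e lint
# Run the agent/util-scripts tests.
$ tox -e util-scripts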
To run the full suite of unit tests in parallel, invoke the run-unittests script at the top level of the pbench repository.
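Assuming the script is executable and you are at the top of the checkout, that is simply:

$ ./run-unittests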
This project uses flake8 (pinned at 3.8.3) for code style enforcement, linting, and checking.
This project makes use of pre-commit to do automatic lint and style checking on every commit containing Python files.
To install the pre-commit hook, run the executable from your Python 3 framework while in your current pbench git checkout:
$ cd ~/pbench
$ pip3 install pre-commit
$ pre-commit install --install-hooks
Once installed, all commits will run the test hooks. If your changes fail any of the tests, the commit will be rejected.
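You can also run the same hooks by hand before committing; pre-commit supports checking the entire tree:

# Run every configured hook against all files, not just staged changes.
$ pre-commit run --all-files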
We employ a simple major, minor, release, build (optional) scheme for tagging, starting with the v0.70.0 release (v<Major>.<Minor>.<Release>[-<Build>]). Prior to the v0.70.0 release, the scheme used was mostly v<Major>.<Minor>, where we only had minor releases (Major = 0).
The practice of using a -server suffix for tags is also ending with the v0.70.0 release.
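As an illustration, hypothetical tag names following this scheme would look like:

v0.70.0      # Major = 0, Minor = 70, Release = 0
v0.70.1      # the next release on the v0.70 line
v0.70.1-3    # the same release with an optional build number of 3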
This same GitHub "tag" scheme is used with tags applied to container images we build, with the following exceptions for tag names:
latest -- always points to the "latest" container image pushed to a repository
v<Major>-latest -- always points to the "latest" <Major> released container image
v<Major>.<Minor>-latest -- always points to the "latest" release for <Major>.<Minor> released container images
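For example, pulling an image by one of these tags might look like the following; the registry path quay.io/pbench/pbench-agent is purely illustrative, so substitute the project's actual image repository:

# Pull whatever the latest v0.70 release image currently is (image path assumed).
$ podman pull quay.io/pbench/pbench-agent:v0.70-latest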