Pbench

A Benchmarking and Performance Analysis Framework

The code base includes three sub-systems. The first is the collection agent, pbench-agent, responsible for providing commands for running benchmarks across one or more systems, while properly collecting the configuration data for those systems, and specified telemetry or data from various tools (sar, vmstat, perf, etc.).

The second sub-system is the pbench-server, which is responsible for archiving result tar balls, indexing them, and unpacking them for display.

The third sub-system is the web-server JS and CSS files, used to display various graphs and results, and any other content generated by the pbench-agent during benchmark and tool post-processing steps.

The pbench Dashboard code lives in its own repository.

How is it installed?

Instructions on installing pbench-agent can be found in the Pbench Agent Getting Started Guide.

For Fedora, CentOS, and RHEL users, we have made available COPR builds for the pbench-agent, pbench-server, pbench-web-server, and some benchmark and tool packages.

Install the pbench-web-server package on the machine from which you want to run the pbench-agent workloads; this allows you to view the graphs before sending the results to a server, or even if no server is configured to receive results.
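For example, on a Fedora-family system the agent and the local web-server bits might be installed from COPR roughly as follows; the COPR repository name below is a placeholder, so check the Pbench Agent Getting Started Guide for the actual repository:

$ sudo dnf copr enable <copr-owner>/pbench   # placeholder repository name
$ sudo dnf install pbench-agent pbench-web-server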

You might want to consider browsing through the rest of the documentation.

How do I use pbench?

Refer to the Pbench Agent Getting Started Guide.

TL;DR? See "TL;DR - How to set up the

pbench-agent
and run a benchmark " in the main documentation for a super quick set of introductory steps.
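As a rough sketch of that workflow (command names are taken from the agent's documented tooling; consult the guide for the authoritative steps):

$ pbench-register-tool-set                 # register the default set of monitoring tools
$ pbench-user-benchmark -- sleep 30        # run an arbitrary workload under pbench
$ pbench-move-results                      # send the result tar ball to the server, if one is configured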

Where is the source kept?

The latest source code is at https://github.com/distributed-system-analysis/pbench.

The pbench dashboard code is maintained separately at https://github.com/distributed-system-analysis/pbench-dashboard.

Is there a mailing list for discussions?

Yes, we use Google Groups.

Is there a place to track current and future work items?

Yes, we use GitHub Projects. There are projects covering the Agent and the Server, as well as a project named for the current milestone.

How can I contribute?

Below are some simple steps for setting up a development environment for working with the Pbench code base. For more detailed instructions on the workflow and process of contributing code to Pbench, refer to the Guidelines for Contributing.

Getting the Code

$ git clone https://github.com/distributed-system-analysis/pbench
$ cd pbench

Running the Unit Tests

To run the unit tests quickly from within the checked-out source tree, execute:

  • jenkins/run jenkins/tox -r --current-env -e jenkins-pytests
  • jenkins/run jenkins/tox -r --current-env -e jenkins-unittests

The above commands run the tests in a Fedora-based container with all the proper packages installed.

If you want to run the unit tests outside of that environment, you need to install tox properly in your environment (Fedora/CentOS/RHEL):

$ sudo dnf install -y perl-JSON python3-pip python3-tox

Once tox is installed you can run the unit tests (use tox --listenvs to see the full list); e.g.:

  • tox -e util-scripts -- for agent/util-scripts tests
  • tox -e server -- for server tests
  • tox -e lint -- to run the linting and code style checks

To run the full suite of unit tests in parallel, invoke the run-unittests script at the top level of the pbench repository.
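For example, from the top of the checkout (assuming the script is executable):

$ ./run-unittests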

Python formatting

This project uses flake8 (version 3.8.3) for code style enforcement, linting, and checking.

All Python code contributed to pbench must meet those style requirements, which are enforced by the pre-commit hook using the black (version 1.19b0) Python code formatter.
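If you want to check formatting and lint locally before committing, a minimal sketch, assuming black and flake8 are available in your Python 3 environment:

$ pip3 install black flake8
$ black --check .    # report any files black would reformat
$ flake8 .           # run the lint and style checks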

Use pre-commit to set automatic commit requirements

This project makes use of pre-commit to do automatic lint and style checking on every commit containing Python files.

To install the pre-commit hook, run the following from your Python 3 environment while in your pbench git checkout:

$ cd ~/pbench
$ pip3 install pre-commit
$ pre-commit install --install-hooks

Once installed, all commits will run the test hooks. If your changes fail any of the tests, the commit will be rejected.
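You can also run the hooks by hand across the whole tree at any time:

$ pre-commit run --all-files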

Pbench Release Tag Scheme (GitHub)

We employ a simple major, minor, release, build (optional) scheme for tagging starting with the v0.70.0 release (v<Major>.<Minor>.<Release>[-<Build>]). Prior to the v0.70.0 release, the scheme used was mostly v<Major>.<Minor>, where we only had minor releases (Major = 0).

The practice of using -agent or -server in tag names is also ending with the v0.70.0 release.
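Purely to illustrate the scheme (these tag names are made up, not actual releases):

v1.2.3      # Major 1, Minor 2, Release 3
v1.2.3-4    # the same release with an optional build number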

Container Image Tags

This same GitHub "tag" scheme is used with tags applied to container images we build, with the following exceptions for tag names:

  • latest - always points to the "latest" container image pushed to a repository
  • v<Major>-latest - always points to the "latest" Major released image
  • v<Major>.<Minor>-latest - always points to the "latest" release for Major.Minor released images
  • <git commit hash> (9 characters) - the commit hash of the checked-out code
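For example, pulling an agent image by tag might look roughly like this; the registry and repository names below are placeholders rather than the project's actual locations:

$ podman pull <registry>/<namespace>/pbench-agent:latest
$ podman pull <registry>/<namespace>/pbench-agent:v0-latest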
