GAP Benchmark Suite


This is the reference implementation for the GAP Benchmark Suite. It is designed to be a portable high-performance baseline that only requires a compiler with support for C++11. It uses OpenMP for parallelism, but it can be compiled without OpenMP to run serially. The details of the benchmark can be found in the specification.

The GAP Benchmark Suite is intended to help graph processing research by standardizing evaluations. Fewer differences between graph processing evaluations will make it easier to compare different research efforts and quantify improvements. The benchmark not only specifies graph kernels, input graphs, and evaluation methodologies, but it also provides an optimized baseline implementation (this repo). These baseline implementations are representative of state-of-the-art performance, and thus new contributions should outperform them to demonstrate an improvement.

Kernels Included

  • Breadth-First Search (BFS) - direction optimizing
  • Single-Source Shortest Paths (SSSP) - delta stepping
  • PageRank (PR) - iterative method in pull direction
  • Connected Components (CC) - Afforest & Shiloach-Vishkin
  • Betweenness Centrality (BC) - Brandes
  • Triangle Counting (TC) - Order invariant with possible relabelling

Quick Start

Build the project:

$ make

Override the default C++ compiler:

$ CXX=g++-8 make

Test the build:

$ make test

Run BFS on 1,024 vertices for 1 iteration:

$ ./bfs -g 10 -n 1

Additional command-line flags can be found with -h.


Graph Loading

All of the binaries use the same command-line options for loading graphs:

  • -g 20 generates a Kronecker graph with 2^20 vertices (Graph500 specifications)
  • -u 20 generates a uniform random graph with 2^20 vertices (degree 16)
  • -f graph.el loads graph from file graph.el
  • -sf graph.el symmetrizes graph loaded from file graph.el

The graph loading infrastructure understands the following formats:

  • plain-text edge-list with an edge per line as node1 node2
  • plain-text weighted edge-list with an edge per line as node1 node2 weight
  • 9th DIMACS Implementation Challenge format
  • Metis format (used in 10th DIMACS Implementation Challenge)
  • Matrix Market format
  • serialized pre-built graph (use converter to make)
  • weighted serialized pre-built graph (use converter to make)

Executing the Benchmark

We provide a simple makefile-based approach to automate executing the benchmark which includes fetching and building the input graphs. Using these makefiles is not a requirement of the benchmark, but we provide them as a starting point. For example, a user could save disk space by storing the input graphs in fewer formats at the expense of longer loading and conversion times. Anything that complies with the rules in the specification is allowed by the benchmark.

Warning: A full run of this benchmark can be demanding and should probably not be done on a laptop. Building the input graphs requires about 275 GB of disk space and 64 GB of RAM. Depending on your filesystem and internet bandwidth, building the graphs can take up to 8 hours. Once the input graphs are built, you can delete the raw graph downloads to free up some disk space. Executing the benchmark itself will require only a few hours.

Build the input graphs:

$ make bench-graphs

Execute the benchmark suite:

$ make bench-run


The GAP Benchmark Suite is also included in the Spack package manager. To install:

$ spack install gapbs

How to Cite

Please cite this code by the benchmark specification:

Scott Beamer, Krste Asanović, David Patterson. The GAP Benchmark Suite. arXiv:1508.03619 [cs.DC], 2015.
