An extensive evaluation and comparison of 28 state-of-the-art superpixel algorithms on 5 datasets.
This repository contains the source code used for the evaluation in the paper cited below, a large-scale comparison of state-of-the-art superpixel algorithms.
Please cite the following work if you use this benchmark or the provided tools or implementations:
 D. Stutz, A. Hermans, B. Leibe. Superpixels: An Evaluation of the State-of-the-Art. Computer Vision and Image Understanding, 2018.
Also make sure to cite the relevant additional papers when using individual datasets or superpixel algorithms.
The evaluation metrics are implemented in `lib_eval/evaluation.h`, and an easy-to-use command line tool, `eval_average_cli`, is provided; see the corresponding documentation and examples in Executables and Examples, respectively.
Superpixels group pixels similar in color and other low-level properties. In this respect, superpixels address two problems inherent to the processing of digital images: firstly, pixels are merely a result of discretization; and secondly, the high number of pixels in large images prevents many algorithms from being computationally feasible. Superpixels were introduced as more natural entities - grouping pixels which perceptually belong together while heavily reducing the number of primitives.
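To make this idea concrete, here is a minimal, purely illustrative Python sketch (not part of this benchmark; `grid_superpixels` is a hypothetical helper) that groups pixels into a regular grid of segments and represents each segment by its mean color, reducing 64 pixels to 4 primitives:

```python
import numpy as np

def grid_superpixels(image, step=4):
    """Partition an image into a regular grid of square segments and
    replace each pixel by the mean intensity of its segment."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cols = (w + step - 1) // step  # number of grid cells per row
    labels = (ys // step) * cols + (xs // step)
    out = np.zeros(image.shape, dtype=float)
    for label in np.unique(labels):
        mask = labels == label
        out[mask] = image[mask].mean(axis=0)
    return labels, out

# toy 8x8 grayscale "image": left half dark, right half bright
img = np.zeros((8, 8))
img[:, 4:] = 1.0
labels, recon = grid_superpixels(img, step=4)
print(np.unique(labels).size)  # 4 superpixels instead of 64 pixels
```

Real superpixel algorithms adapt segment boundaries to image content instead of using a fixed grid, but the reduction of primitives works the same way.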
This repository can be understood as supplementary material for an extensive evaluation of 28 algorithms on 5 datasets regarding visual quality, performance, runtime, implementation details and robustness, as presented in the paper cited above. To ensure a fair comparison, parameters have been optimized on separate training sets; as the number of generated superpixels heavily influences parameter optimization, we additionally enforced connectivity. Furthermore, to evaluate superpixel algorithms independent of the number of superpixels, we propose to integrate over commonly used metrics such as Boundary Recall, Undersegmentation Error and Explained Variation. Finally, we present a ranking of the superpixel algorithms considering multiple metrics and independent of the number of generated superpixels, as shown below.
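Enforcing connectivity means splitting any label that covers several disconnected regions into one fresh label per 4-connected component, so the reported number of superpixels matches the number of actual segments. A minimal sketch of this step, assuming a 2-D integer label map (illustrative only, not the benchmark's implementation):

```python
import numpy as np
from collections import deque

def enforce_connectivity(labels):
    """Relabel a superpixel map so that every label corresponds to exactly
    one 4-connected region; disconnected fragments get fresh labels."""
    h, w = labels.shape
    out = -np.ones((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if out[sy, sx] != -1:
                continue
            # BFS flood fill over pixels sharing the original label
            queue = deque([(sy, sx)])
            out[sy, sx] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and out[ny, nx] == -1
                            and labels[ny, nx] == labels[sy, sx]):
                        out[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return out

# label 1 appears as two disconnected fragments
seg = np.array([[1, 1, 0, 1],
                [0, 0, 0, 1]])
fixed = enforce_connectivity(seg)
print(np.unique(fixed).size)  # 3 connected regions instead of 2 labels
```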
The table shows the average ranks across the 5 datasets, taking into account Average Boundary Recall (ARec) and Average Undersegmentation Error (AUE) - lower is better in both cases, see Benchmark. The confusion matrix shows the rank distribution of the algorithms across the datasets.
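Boundary Recall, one of the metrics averaged into ARec, is the fraction of ground-truth boundary pixels that have a superpixel boundary pixel within a small tolerance radius. A pure-NumPy sketch under that definition (tolerance handled by a naive shift-based dilation; not the benchmark's implementation in `lib_eval`):

```python
import numpy as np

def boundary_map(labels):
    """Mark pixels whose right or bottom neighbor has a different label."""
    b = np.zeros(labels.shape, dtype=bool)
    b[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    b[:-1, :] |= labels[:-1, :] != labels[1:, :]
    return b

def boundary_recall(gt_labels, sp_labels, r=1):
    """Fraction of ground-truth boundary pixels with a superpixel boundary
    pixel inside a (2r+1) x (2r+1) neighborhood."""
    gt = boundary_map(gt_labels)
    sp = boundary_map(sp_labels)
    h, w = sp.shape
    dilated = np.zeros_like(sp)
    for dy in range(-r, r + 1):  # naive dilation by shifting the map
        for dx in range(-r, r + 1):
            dst_y = slice(max(0, dy), min(h, h + dy))
            dst_x = slice(max(0, dx), min(w, w + dx))
            src_y = slice(max(0, -dy), min(h, h - dy))
            src_x = slice(max(0, -dx), min(w, w - dx))
            dilated[dst_y, dst_x] |= sp[src_y, src_x]
    total = gt.sum()
    return (gt & dilated).sum() / total if total else 1.0

# ground truth splits at column 3, superpixels at column 4
gt = np.zeros((6, 6), dtype=int); gt[:, 3:] = 1
sp = np.zeros((6, 6), dtype=int); sp[:, 4:] = 1
print(boundary_recall(gt, sp, r=1))  # 1.0: off by one pixel, within tolerance
print(boundary_recall(gt, sp, r=0))  # 0.0: no exact overlap
```

Averaging such metrics over a range of superpixel counts, rather than reporting them at a single count, is what makes the ranking independent of the number of generated superpixels.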
The following algorithms were evaluated in the paper cited above, and most of them are included in this repository:
|Included|Algorithm|References|
|:---:|---|---|
|:ballot_box_with_check:|CCS|Ref. & Web|
|Instructions|CIS|Ref. & Web|
|:ballot_box_with_check:|CRS|Ref. & Web|
|:ballot_box_with_check:|CW|Ref. & Web|
|:ballot_box_with_check:|DASP|Ref. & Web|
|:ballot_box_with_check:|EAMS|Ref., Ref., Ref. & Web|
|:ballot_box_with_check:|ERS|Ref. & Web|
|:ballot_box_with_check:|FH|Ref. & Web|
|:ballot_box_with_check:|PB|Ref. & Web|
|:ballot_box_with_check:|preSLIC|Ref. & Web|
|:ballot_box_with_check:|SEAW|Ref. & Web|
|:ballot_box_with_check:|SEEDS|Ref. & Web|
|:ballot_box_with_check:|SLIC|Ref. & Web|
|:ballot_box_with_check:|TP|Ref. & Web|
|:ballot_box_with_check:|TPS|Ref. & Web|
|:ballot_box_with_check:|WP|Ref. & Web|
|:ballot_box_with_check:|PF|Ref. & Web|
|:ballot_box_with_check:|LSC|Ref. & Web|
|:ballot_box_with_check:|RW|Ref. & Web|
|:ballot_box_with_check:|QS|Ref. & Web|
|:ballot_box_with_check:|NC|Ref. & Web|
|:ballot_box_with_check:|VCCS|Ref. & Web|
|:ballot_box_with_check:|POISE|Ref. & Web|
|:ballot_box_with_check:|VC|Ref. & Web|
|:ballot_box_with_check:|ETPS|Ref. & Web|
|:ballot_box_with_check:|ERGC|Ref., Ref. & Web|
To keep the benchmark alive, we encourage authors to make their implementations publicly available and integrate them into this benchmark. We are happy to help with the integration and to update the results published in the paper and on the project page. Also see the Documentation for details.
Licenses for source code corresponding to:
D. Stutz, A. Hermans, B. Leibe. Superpixels: An Evaluation of the State-of-the-Art. Computer Vision and Image Understanding, 2018.
Note that the source code/data is based on other projects for which separate licenses apply, see:
Copyright (c) 2016-2018 David Stutz, RWTH Aachen University
Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").
The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.
Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time.
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the corresponding papers (see above) in documents and papers that report on research using the Software.