
SpixelFCN: Superpixel Segmentation with Fully Convolutional Network

This is a PyTorch implementation of the superpixel segmentation network introduced in our CVPR 2020 paper:

Superpixel Segmentation with Fully Convolutional Network

Fengting Yang, Qian Sun, Hailin Jin, and Zihan Zhou

Please contact Fengting Yang ([email protected]) if you have any questions.


Prerequisites

The training code was mainly developed and tested with Python 2.7, PyTorch 0.4.1, CUDA 9, and Ubuntu 16.04.

During test, we make use of the component connection method in SSN to enforce connectivity in superpixels. The code has been included in third_party/cython/. To compile it:
cd third_party/cython/
python install --user
cd ../..
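The idea behind this connectivity step can be sketched in pure Python (a minimal illustration of component merging with 4-connectivity; the actual routine is the SSN Cython code, and this helper and its name are ours for illustration only):

```python
from collections import deque
import numpy as np

def enforce_connectivity(labels, min_size=4):
    """Relabel so every superpixel is one connected region.

    Sketch of the idea: split each label into 4-connected components,
    then absorb any component smaller than `min_size` into an adjacent
    component found during the scan.
    """
    h, w = labels.shape
    out = -np.ones((h, w), dtype=np.int64)
    next_id = 0
    comps = []  # (pixel list, id of one neighbouring component or -1)
    for sy in range(h):
        for sx in range(w):
            if out[sy, sx] != -1:
                continue
            # flood-fill one 4-connected component of this label
            lab = labels[sy, sx]
            pix, nbr = [], -1
            q = deque([(sy, sx)])
            out[sy, sx] = next_id
            while q:
                y, x = q.popleft()
                pix.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if out[ny, nx] == -1 and labels[ny, nx] == lab:
                        out[ny, nx] = next_id
                        q.append((ny, nx))
                    elif out[ny, nx] != -1 and out[ny, nx] != next_id:
                        nbr = out[ny, nx]  # remember an adjacent component
            comps.append((pix, nbr))
            next_id += 1
    # absorb tiny components into a neighbouring component
    for pix, nbr in comps:
        if len(pix) < min_size and nbr != -1:
            for y, x in pix:
                out[y, x] = nbr
    return out
```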


Demo

The demo script
provides superpixels with grid size 16 x 16 using our pre-trained model (in
). Please feel free to provide your own images by copying them into ./demo/inputs, and run
python --data_dir=./demo/inputs --data_suffix=jpg --output=./demo
The results will be generated in a new folder under ./demo.

Data preparation

To generate the training and test datasets, please first download the data from the original BSDS500 dataset and extract it to

. Then, run 
cd data_preprocessing
python --dataset= --dump_root=
python --dataset= --dump_root=
cd ..
The code will generate three folders under the
, named as 
, and
, and three
files that record the absolute paths of the images, named as
, and

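These list files are plain text with one absolute image path per line. If you ever need to rebuild such a list for a custom split, a minimal sketch (the function name, suffix default, and layout are our own assumptions, not repo code):

```python
import os

def write_file_list(image_dir, list_path, suffix=".jpg"):
    """Write one absolute image path per line, the list-file format
    described above (illustrative sketch, not the repo's own code)."""
    names = sorted(n for n in os.listdir(image_dir) if n.endswith(suffix))
    with open(list_path, "w") as f:
        for name in names:
            f.write(os.path.abspath(os.path.join(image_dir, name)) + "\n")
    return len(names)
```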

Training

Once the data is prepared, we should be able to train the model by running the following command:

python --data= --savepath=

If we wish to continue a training process or fine-tune from a pre-trained model, we can run

python --data= --savepath= --pretrained= 
The code will resume from the recorded status, which includes the optimizer state and the epoch number.
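Resuming this way relies on the standard PyTorch checkpoint pattern: a dict bundling the model weights, the optimizer state, and the epoch counter. A generic sketch (the key names here are illustrative, not necessarily the exact ones this repo uses):

```python
import torch

def save_checkpoint(path, model, optimizer, epoch):
    # bundle everything needed to resume: weights, optimizer state, epoch
    torch.save({"epoch": epoch,
                "state_dict": model.state_dict(),
                "optimizer": optimizer.state_dict()}, path)

def load_checkpoint(path, model, optimizer):
    # restore weights and optimizer state, return the epoch to resume at
    ckpt = torch.load(path)
    model.load_state_dict(ckpt["state_dict"])
    optimizer.load_state_dict(ckpt["optimizer"])
    return ckpt["epoch"] + 1
```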

The training log can be viewed in a tensorboard session by running
tensorboard --logdir= --port=8888

If everything is set up properly, reasonable segmentation should be observed after 10 epochs.


Testing

We provide test code to generate: 1) superpixel visualizations and 2) the

files for evaluation.

To test on BSDS500, run

python --data_dir= --output= --pretrained=

To test on NYUv2, please first extract our pre-processed dataset from

 , or follow the instructions on the superpixel benchmark
 to generate the test dataset, and then run
python --data_dir= --output= --pretrained=

To test on other datasets, please first collect all the images into one folder

, and then convert them into the same 
format (e.g. 
) if necessary, and run
python --data_dir= --data_suffix= --output= --pretrained=
Superpixels with grid size
16 x 16
will be generated by default. To generate superpixels with a different grid size, we simply need to resize the images to the appropriate resolution before passing them through the code. Please refer to
for the details.
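Concretely, since the network always segments on a fixed 16 x 16 lattice, the superpixel count is (H/16) * (W/16), so getting a target count is just a matter of scaling the image. A small sketch of that arithmetic (the helper name is ours, not part of the repo):

```python
import math

def resize_for_superpixels(height, width, n_spixels, grid=16):
    """Choose a resolution whose fixed `grid`-pixel lattice yields about
    `n_spixels` superpixels, keeping the aspect ratio.

    The count is (H / grid) * (W / grid); solve for the uniform scale
    that makes this product equal n_spixels, then round each side to a
    multiple of the grid so the lattice tiles the image exactly."""
    scale = math.sqrt(n_spixels * grid * grid / (height * width))
    new_h = max(grid, int(round(height * scale / grid)) * grid)
    new_w = max(grid, int(round(width * scale / grid)) * grid)
    return new_h, new_w
```

For example, a 480 x 320 image already yields 30 * 20 = 600 superpixels, while a 100 x 100 image must be upscaled to 160 x 160 to yield 100.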


Evaluation

We use the code from the superpixel benchmark for superpixel evaluation. Detailed instructions are available in that repository; in brief, please

(1) download the code and build it accordingly;

(2) edit the variables


(3) run

cp /eval_spixel/ /examples/bash/
cd  /examples/
several files should be generated in the
folders in the corresponding test outputs;

(4) run

cd eval_spixel
python --src= --dst=
(5) open
, set the
 and modify the 
according to the test setting. The default setting is for our BSDS500 test set.

(6) run the

, and the ASA score, CO score, and BR-BP curve of our method should be shown on the screen. If you wish to compare our method with others, you can first run those methods and organize their data as described above, then uncomment the code in the
to generate a figure similar to those shown in our paper.
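For reference, the ASA (achievable segmentation accuracy) score reported by the benchmark can be sketched as follows (our own minimal re-implementation for illustration, not the benchmark's code):

```python
import numpy as np

def asa_score(sp_labels, gt_labels):
    """Achievable Segmentation Accuracy: for each superpixel, count the
    pixels of its best-overlapping ground-truth segment, then divide by
    the total number of pixels. 1.0 means superpixels perfectly respect
    the ground-truth boundaries."""
    sp = sp_labels.ravel()
    gt = gt_labels.ravel()
    n_sp, n_gt = sp.max() + 1, gt.max() + 1
    # joint histogram of (superpixel, gt-segment) label pairs
    hist = np.zeros((n_sp, n_gt), dtype=np.int64)
    np.add.at(hist, (sp, gt), 1)
    return hist.max(axis=1).sum() / sp.size
```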


Acknowledgement

Our code is developed based on the training framework provided by FlowNetPytorch.
