PSANet: Point-wise Spatial Attention Network for Scene Parsing (under construction)

by Hengshuang Zhao*, Yi Zhang*, Shu Liu, Jianping Shi, Chen Change Loy, Dahua Lin, and Jiaya Jia; details are in the project page.

Introduction

This repository is built for PSANet and contains the source code for the PSA module together with the related evaluation code. For installation, please merge the related layers into the PSPNet repository and follow the description there (tested with CUDA 7.0/7.5 + cuDNN v4).

PyTorch Version

A highly optimized PyTorch codebase for semantic segmentation is available in the semseg repository, including full training and testing code for both PSPNet and PSANet.
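
For intuition, below is a minimal, heavily simplified PyTorch sketch of the point-wise spatial attention idea, in which each spatial position aggregates features from all other positions through a learned attention map. It is only an illustration under simplifying assumptions, not the released Caffe layer or the semseg implementation: the actual PSA module predicts over-complete per-position attention masks with separate 'collect' and 'distribute' branches, and every class and parameter name below is hypothetical.

# Simplified illustration only: a single full (H*W) x (H*W) attention map,
# not the over-complete bidirectional masks of the real PSA module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplePointwiseSpatialAttention(nn.Module):
    def __init__(self, in_channels, reduced_channels):
        super().__init__()
        # channel reduction before attention, as is common in attention heads
        self.reduce = nn.Conv2d(in_channels, reduced_channels, 1, bias=False)
        self.query = nn.Conv2d(reduced_channels, reduced_channels, 1)

    def forward(self, x):
        n, _, h, w = x.shape
        feat = self.reduce(x)                                  # (n, c, h, w)
        q = self.query(feat).view(n, -1, h * w)                # (n, c, hw)
        k = feat.view(n, -1, h * w)                            # (n, c, hw)
        # point-wise attention: one weight for every pair of positions
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), -1)  # (n, hw, hw)
        v = feat.view(n, -1, h * w)                            # (n, c, hw)
        # each position collects information from all positions
        out = torch.bmm(v, attn.transpose(1, 2)).view(n, -1, h, w)
        return torch.cat([feat, out], dim=1)                   # (n, 2c, h, w)

# Usage sketch: psa = SimplePointwiseSpatialAttention(2048, 512)
# y = psa(torch.randn(1, 2048, 60, 60))   # (1, 1024, 60, 60)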

Usage

  1. Clone the repository recursively:
   git clone --recursive https://github.com/hszhao/PSANet.git
  2. Merge the Caffe layers into the PSPNet repository:

     Point-wise spatial attention: pointwise_spatial_attention_layer.hpp/cpp/cu and caffe.proto.

  3. Build Caffe and matcaffe:
   cd $PSANET_ROOT/PSPNet
   cp Makefile.config.example Makefile.config
   vim Makefile.config
   make -j8 && make matcaffe
   cd ..
  4. Evaluation:
  • Evaluation code is in the folder 'evaluation'.
  • Download the trained models and put them into the related dataset folder under 'evaluation/model'; refer to 'README.md'.
  • Modify the related paths in 'eval_all.m':

    Mainly the variables 'data_root' and 'eval_list'; your image list for evaluation should be similar to those in the folder 'evaluation/samplelist' if you use this evaluation code structure.

   cd evaluation
   vim eval_all.m
  • Run the evaluation scripts:
   ./run.sh
  5. Results:

Predictions will be saved in the folder 'evaluation/mc_result' and the expected scores are listed below:

(mIoU/pAcc. stand for mean IoU and pixel accuracy; 'ss' and 'ms' denote single-scale and multi-scale testing. A generic sketch of multi-scale testing follows this list.)

ADE20K:

| network   | training data | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum |
| :-------: | :-----------: | :----------: | :-------------: | :-------------: | :----: |
| PSANet50  | train         | val          | 41.92/80.17     | 42.97/80.92     | a8e884 |
| PSANet101 | train         | val          | 42.75/80.71     | 43.77/81.51     | ab5e56 |

VOC2012:

| network   | training data          | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum |
| :-------: | :--------------------: | :----------: | :-------------: | :-------------: | :----: |
| PSANet50  | train_aug              | val          | 77.24/94.88     | 78.14/95.12     | d5fc37 |
| PSANet101 | train_aug              | val          | 78.51/95.18     | 79.77/95.43     | 5d8c0f |
| PSANet101 | COCO + train_aug + val | test         | -/-             | 85.7/-          | 3c6a69 |

Cityscapes:

| network   | training data         | testing data | mIoU/pAcc. (ss) | mIoU/pAcc. (ms) | md5sum |
| :-------: | :-------------------: | :----------: | :-------------: | :-------------: | :----: |
| PSANet50  | fine_train            | fine_val     | 76.65/95.99     | 77.79/96.24     | 25c06a |
| PSANet101 | fine_train            | fine_val     | 77.94/96.10     | 79.05/96.30     | 3ac1bf |
| PSANet101 | fine_train            | fine_test    | -/-             | 78.6/-          | 3ac1bf |
| PSANet101 | fine_train + fine_val | fine_test    | -/-             | 80.1/-          | 1dfc91 |

  6. Demo video:
  • Video processed by PSANet (with PSPNet) on the BDD dataset for drivable area segmentation: Video.
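
As a rough illustration of what multi-scale ('ms') testing in the tables above involves, the sketch below averages class probabilities over several rescaled copies of an input image. It is a generic scheme written as a PyTorch sketch, not the exact protocol of 'eval_all.m', and the function and parameter names are hypothetical.

# Generic multi-scale testing sketch (hypothetical names; not the repository's eval_all.m):
# run the network at several scales, resize the predictions back, and average them.
import torch
import torch.nn.functional as F

def multi_scale_predict(model, image, scales=(0.5, 0.75, 1.0, 1.25, 1.5), num_classes=150):
    # image: (1, 3, H, W) tensor; returns averaged class probabilities (1, C, H, W)
    _, _, h, w = image.shape
    prob_sum = torch.zeros(1, num_classes, h, w)
    with torch.no_grad():
        for s in scales:
            scaled = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
            logits = model(scaled)                                            # (1, C, h', w')
            logits = F.interpolate(logits, size=(h, w), mode='bilinear', align_corners=False)
            prob_sum += F.softmax(logits, dim=1)
    return prob_sum / len(scales)

# label_map = multi_scale_predict(net, img).argmax(dim=1)   # final per-pixel prediction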

Citation

If PSANet is useful for your research, please consider citing:

@inproceedings{zhao2018psanet,
  title={{PSANet}: Point-wise Spatial Attention Network for Scene Parsing},
  author={Zhao, Hengshuang and Zhang, Yi and Liu, Shu and Shi, Jianping and Loy, Chen Change and Lin, Dahua and Jia, Jiaya},
  booktitle={ECCV},
  year={2018}
}

Questions

Please contact '[email protected]' or '[email protected]'.
