F3Net: Fusion, Feedback and Focus for Salient Object Detection

by Jun Wei, Shuhui Wang, Qingming Huang

Introduction

Most existing salient object detection models have achieved great progress by aggregating multi-level features extracted from convolutional neural networks. However, because different convolutional layers have different receptive fields, there are large differences between the features they generate. Common feature fusion strategies (addition or concatenation) ignore these differences and may lead to suboptimal solutions. In this paper, we propose F3Net to solve the above problem. It mainly consists of a cross feature module (CFM) and a cascaded feedback decoder (CFD), trained by minimizing a new pixel position aware loss (PPA). Specifically, CFM aims to selectively aggregate multi-level features. Different from addition and concatenation, CFM adaptively selects complementary components from the input features before fusion, which effectively avoids introducing too much redundant information that may corrupt the original features. In addition, CFD adopts a multi-stage feedback mechanism, where features close to the supervision are fed back to the outputs of previous layers to supplement them and reduce the differences between features. These refined features go through multiple similar iterations before the final saliency maps are generated. Furthermore, different from binary cross entropy, the proposed PPA loss does not treat all pixels equally; it synthesizes the local structure information of a pixel to guide the network to focus more on local details. Hard pixels from boundaries or error-prone parts are given more attention to emphasize their importance. F3Net is able to segment salient object regions accurately and provide clear local details. Comprehensive experiments on five benchmark datasets demonstrate that F3Net outperforms state-of-the-art approaches on six evaluation metrics.
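
To make the loss concrete, here is a minimal PyTorch sketch of a pixel-position-aware style loss that combines weighted BCE with weighted IoU; the 31x31 average-pooling window and the weighting factor of 5 are illustrative assumptions, and the training code in src/ remains the authoritative implementation.

    import torch
    import torch.nn.functional as F

    def ppa_style_loss(pred, mask):
        # pred: raw logits of shape (B, 1, H, W); mask: ground truth in [0, 1], same shape.
        # Pixels whose local neighbourhood disagrees with their own label (boundaries,
        # error-prone regions) receive larger weights.
        weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask)

        # weighted binary cross entropy
        wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction='none')
        wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

        # weighted IoU
        prob = torch.sigmoid(pred)
        inter = ((prob * mask) * weit).sum(dim=(2, 3))
        union = ((prob + mask) * weit).sum(dim=(2, 3))
        wiou = 1 - (inter + 1) / (union - inter + 1)

        return (wbce + wiou).mean()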

Prerequisites

Clone repository

    git clone git@github.com:weijun88/F3Net.git
    cd F3Net/

Download dataset

Download the datasets used for training (DUTS-TR) and testing (PASCAL-S, ECSSD, HKU-IS, DUT-OMRON, DUTS-TE) and unzip them into the data folder.

Download model

  • If you want to test the performance of F3Net, please download the trained model into the out folder.
  • If you want to train your own model, please download the pretrained model into the res folder (the expected directory layout is sketched below).
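
Assuming the folder names used above and the datasets listed in the training and testing sections, the working directory would look roughly like this (a sketch, not an authoritative listing):

    F3Net/
    ├── data/    # datasets, e.g. DUTS-TR, DUTS-TE, ECSSD, PASCAL-S, HKU-IS, DUT-OMRON
    ├── res/     # pretrained backbone weights (ResNet-50)
    ├── out/     # trained F3Net models
    ├── src/     # train.py, test.py
    └── eval/    # MATLAB evaluation code (main.m); saliency maps go to eval/F3Net/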

Training

    cd src/
    python3 train.py
  • ResNet-50 is used as the backbone of F3Net and DUTS-TR is used to train the model.
  • batch=32, lr=0.05, momen=0.9, decay=5e-4, epoch=32.
  • Warm-up and linear decay strategies are used to change the learning rate lr; a sketch of such a schedule is given after this list.
  • After training, the result models will be saved in the out folder.
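
The exact schedule is defined in src/train.py; purely as an illustration of warm-up followed by linear decay, a per-iteration schedule could look like the sketch below (the warm-up fraction and the decay target are assumptions). Such a function would be called once per iteration, with total_steps = epoch * iterations_per_epoch.

    def adjust_lr(optimizer, step, total_steps, base_lr=0.05, warmup_frac=0.1):
        # Hypothetical warm-up + linear-decay schedule; the released train.py may differ.
        warmup_steps = max(1, int(total_steps * warmup_frac))
        if step < warmup_steps:
            lr = base_lr * (step + 1) / warmup_steps                                   # linear warm-up
        else:
            lr = base_lr * (total_steps - step) / max(1, total_steps - warmup_steps)   # linear decay
        for group in optimizer.param_groups:
            group['lr'] = lr
        return lr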

Testing

    cd src
    python3 test.py
  • After testing, saliency maps of PASCAL-S, ECSSD, HKU-IS, DUT-OMRON and DUTS-TE will be saved in the eval/F3Net/ folder; a per-image inference sketch is given below.
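
As a rough picture of what testing involves, the hedged sketch below runs a single image through an already-constructed model and writes the saliency map to disk; the input size, normalization constants, and the assumption that the network returns logits (possibly as a tuple whose last element is the final prediction) are placeholders, and test.py remains the canonical entry point.

    import cv2
    import numpy as np
    import torch

    def predict_saliency(model, image_path, out_path, size=352,
                         mean=(124.55, 118.90, 102.94), std=(56.77, 55.97, 57.50)):
        # Hypothetical single-image inference; size and mean/std are assumed values.
        img = cv2.imread(image_path)[:, :, ::-1].astype(np.float32)   # BGR -> RGB
        h, w = img.shape[:2]
        x = cv2.resize(img, (size, size))
        x = (x - np.array(mean)) / np.array(std)
        x = torch.from_numpy(x.transpose(2, 0, 1)).unsqueeze(0).float()
        model.eval()
        with torch.no_grad():
            pred = model(x)
            if isinstance(pred, (tuple, list)):   # some networks return several side outputs
                pred = pred[-1]
            sal = torch.sigmoid(pred)[0, 0].cpu().numpy()
        sal = cv2.resize(sal, (w, h)) * 255
        cv2.imwrite(out_path, sal.astype(np.uint8))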

Saliency maps & Trained model

Evaluation

  • To evaluate the performance of F3Net, please use MATLAB to run main.m (a quick Python MAE check is also sketched after this list):
    cd eval
    matlab
    main
  • Quantitative comparisons

  • Qualitative comparisons
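
If a quick sanity check without MATLAB is enough, the following Python sketch computes the mean absolute error (MAE) between predicted saliency maps and ground-truth masks; it assumes .png masks with file names matching the predictions, and the MATLAB toolbox in eval/ remains the reference for the reported numbers.

    import glob
    import os

    import cv2
    import numpy as np

    def mae(pred_dir, gt_dir):
        # Mean absolute error between saliency maps and ground-truth masks, both scaled to [0, 1].
        errors = []
        for gt_path in glob.glob(os.path.join(gt_dir, '*.png')):
            pred_path = os.path.join(pred_dir, os.path.basename(gt_path))   # assumes matching names
            if not os.path.exists(pred_path):
                continue
            gt = cv2.imread(gt_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
            pred = cv2.imread(pred_path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255
            if pred.shape != gt.shape:
                pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))
            errors.append(np.abs(pred - gt).mean())
        return float(np.mean(errors)) if errors else float('nan')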

Citation

  • If you find this work helpful, please cite our paper:
    @inproceedings{F3Net,
      title     = {F3Net: Fusion, Feedback and Focus for Salient Object Detection},
      author    = {Jun Wei and Shuhui Wang and Qingming Huang},
      booktitle = {AAAI Conference on Artificial Intelligence (AAAI)},
      year      = {2020}
    }
