Zero-shot Video Object Segmentation via Attentive Graph Neural Networks (ICCV 2019 Oral)

Code for ICCV 2019 paper: Zero-shot Video Object Segmentation via Attentive Graph Neural Networks

Quick Start


  1. Download all the training datasets, including MSRA10K (split the RGB images and masks into two folders) and the DUT saliency dataset. Create a folder called images and put these datasets into it (data augmentation is suggested for these static images). Download the DAVIS-2016 dataset.

  2. Download the DeepLabv3 model from Google Drive and put it into the folder pretrained/deep_labv3.

  3. Change the video path, image path, and DeepLabv3 path in the training script. Create two txt files that store the saliency dataset names and the DAVIS-2016 training sequence names, then point the txt paths in the training script at them (a sketch for generating these files follows this list).

  4. Run the training command: python <training_script>.py --dataset davis --gpus 0,1
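
A minimal sketch of generating the two txt files from step 3, assuming one name per line; the output file names saliency_train.txt and davis16_train.txt and the images folder layout are placeholders of ours, so point the paths in the training script at whatever you actually use. The DAVIS list is recovered from the per-frame train split that ships with the official DAVIS-2016 release.

    import os

    IMAGES_ROOT = 'images'  # saliency images gathered in step 1

    # One saliency image name per line (MSRA10K + DUT).
    with open('saliency_train.txt', 'w') as f:
        for name in sorted(os.listdir(IMAGES_ROOT)):
            f.write(name + '\n')

    # One DAVIS-2016 training sequence name per line, extracted from the
    # per-frame split file DAVIS/ImageSets/480p/train.txt.
    seqs = []
    with open('DAVIS/ImageSets/480p/train.txt') as f:
        for line in f:
            if not line.strip():
                continue
            # lines look like: /JPEGImages/480p/<sequence>/<frame>.jpg ...
            seq = line.split()[0].split('/')[3]
            if seq not in seqs:
                seqs.append(seq)
    with open('davis16_train.txt', 'w') as f:
        f.write('\n'.join(seqs) + '\n')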


For object-level zero-shot VOS:

  1. Install PyTorch (version 1.0.1).

  2. Download the pretrained model and put it in the snapshots folder. In the testing script, change the DAVIS dataset path, the pretrained model path, and the result path.

  3. Run the test command: python <test_script>.py --dataset davis --gpus 0

  4. Post-process the results with a dense CRF (scale=1 for the unary term; sdims=1, compat=5 for the pairwise Gaussian term; sdims=30, schan=5, compat=9 for the pairwise bilateral term); a sketch follows this list. The pretrained weights can be downloaded from Google Drive.
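
The parameters in step 4 line up with the pydensecrf package, so here is a minimal sketch under that assumption; the function name crf_refine and the binary foreground/background setup are ours, not from the repo.

    import numpy as np
    import pydensecrf.densecrf as dcrf
    from pydensecrf.utils import (unary_from_softmax,
                                  create_pairwise_gaussian,
                                  create_pairwise_bilateral)

    def crf_refine(img, probs, n_iters=5):
        # img: (H, W, 3) uint8 RGB frame; probs: (2, H, W) background/
        # foreground softmax scores from the network.
        h, w = img.shape[:2]
        d = dcrf.DenseCRF(h * w, 2)
        d.setUnaryEnergy(unary_from_softmax(probs, scale=1))  # scale=1 for unary
        gauss = create_pairwise_gaussian(sdims=(1, 1), shape=(h, w))
        d.addPairwiseEnergy(gauss, compat=5)                  # sdims=1, compat=5
        bilat = create_pairwise_bilateral(sdims=(30, 30), schan=(5, 5, 5),
                                          img=img, chdim=2)
        d.addPairwiseEnergy(bilat, compat=9)                  # sdims=30, schan=5, compat=9
        q = d.inference(n_iters)
        return np.argmax(q, axis=0).reshape(h, w).astype(np.uint8)

The number of mean-field iterations (n_iters=5) is a common default and another assumption of ours.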

For instance-level zero-shot VOS (multiple instances):

  1. Download the DAVIS-2017 dataset and run the object-level zero-shot VOS on each video. In this way, we obtain an object-level mask for each frame.

  2. Download the PWCNet code and compute the optical flow for each video.

  3. Download the PReMVOS code. Run the proposal generation and combination code with the provided network. In this way, we obtain instance-level proposals for each frame.

  4. Run the command to select the foreground instances from the first frame of each video and generate the related json and jpeg files (see the sketch after this list). Copy these files into a new folder in PReMVOS called my_data.

  5. Run the refinement_net code in PReMVOS to generate a mask for each instance.

  6. Change the first-frame path and the annotation path in the MergeTrack/ script, then run the MergeTrack code to associate the instance masks across the subsequent frames.
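
One plausible reading of step 4 is to keep the first-frame PReMVOS proposals that sufficiently overlap the object-level mask obtained in step 1. Below is a sketch of that selection; the IoU criterion, the 0.5 threshold, and all names here are our assumptions rather than values from the repo.

    import numpy as np

    def mask_iou(a, b):
        # IoU between two boolean masks of the same shape.
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return float(inter) / union if union > 0 else 0.0

    def select_foreground(proposal_masks, object_mask, thresh=0.5):
        # Indices of first-frame proposals overlapping the object-level mask.
        fg = object_mask > 0
        return [i for i, m in enumerate(proposal_masks)
                if mask_iou(m > 0, fg) >= thresh]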

The segmentation results on the DAVIS-2016, YouTube-Objects, and DAVIS-2017 datasets can be downloaded from Google Drive.


If you find the code and dataset useful in your research, please consider citing:

@InProceedings{Wang_2019_ICCV,
  author = {Wang, Wenguan and Lu, Xiankai and Shen, Jianbing and Crandall, David J. and Shao, Ling},
  title = {Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  year = {2019}
}

Other related projects/papers:

See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks (CVPR19)

Saliency-Aware Geodesic Video Object Segmentation (CVPR15)

Learning Unsupervised Video Primary Object Segmentation through Visual Attention (CVPR19)

For any comments, please email: [email protected]
