
OSVOS: One-Shot Video Object Segmentation

Check our project page for additional information.

OSVOS is a method that tackles the task of semi-supervised video object segmentation. It is based on a fully-convolutional neural network architecture that is able to successively transfer generic semantic information, learned on ImageNet, to the task of foreground segmentation, and finally to learning the appearance of a single annotated object of the test sequence (hence one-shot). Experiments on DAVIS 2016 show that OSVOS is faster than currently available techniques and improves the state of the art by a significant margin (79.8% vs 68.0%).
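The three-stage transfer described above (ImageNet features, then generic foreground segmentation, then one-shot fine-tuning on the annotated frame) can be sketched schematically. Everything below is a hypothetical illustration of the order of the stages, not the repository's actual API; all function and variable names are placeholders.

```python
# Schematic sketch of the OSVOS pipeline described above.
# All names, arguments, and return values are hypothetical placeholders.

def pretrain_on_imagenet():
    """Stage 1: generic semantic features (in practice, a VGG-16 checkpoint)."""
    return {"stage": "imagenet"}

def train_parent_network(model, davis_train_set):
    """Stage 2: transfer to generic foreground segmentation on DAVIS."""
    return dict(model, stage="parent", data=davis_train_set)

def finetune_one_shot(model, first_frame, first_mask):
    """Stage 3: fine-tune on the single annotated frame of the test sequence."""
    return dict(model, stage="one-shot")

def segment_sequence(model, frames):
    """Apply the fine-tuned model independently to every remaining frame."""
    return ["mask_for_%s" % f for f in frames]

model = pretrain_on_imagenet()
model = train_parent_network(model, davis_train_set="DAVIS-2016")
model = finetune_one_shot(model, first_frame="frame_00000", first_mask="mask_00000")
masks = segment_sequence(model, frames=["frame_00001", "frame_00002"])
```

Note that after the one-shot step no further temporal information is used: each remaining frame is segmented independently.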

This TensorFlow code is an a posteriori implementation of OSVOS and does not contain the boundary snapping branch. The results published in the paper were obtained using the Caffe version, which can be found at OSVOS-caffe.

NEW: PyTorch implementation also available: OSVOS-PyTorch!


Installation

  1. Clone the OSVOS-TensorFlow repository:
    git clone
  2. If necessary, install the required dependencies:
  • Python 2.7 or Python 3 (thanks to @xoltar)
  • TensorFlow r1.0 or higher (pip install tensorflow-gpu) along with its standard dependencies
  • Other Python dependencies: PIL (Pillow), numpy, scipy, matplotlib, six
  3. Download the parent model from here (55 MB) and unzip it in this repository (it should create a folder named 'OSVOS_parent').
  4. All the steps to re-train OSVOS are provided in this repository. If you would rather test with the pre-trained models, you can download them from here (2.2 GB) and unzip them in the same place (it should create a folder for every model).
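After installing, a quick sanity check of the dependencies listed above can save a failed run later. The helper below is not part of the repository; it is a small assumed convenience that verifies TensorFlow r1.0+ and the other Python packages are importable.

```python
# Optional environment check for the dependencies listed above.
# This helper is hypothetical, not part of the OSVOS-TensorFlow repository.
import importlib

def meets_min_version(version, minimum):
    """Compare dotted version strings numerically, e.g. '1.10' >= '1.2'."""
    as_tuple = lambda v: tuple(int(p) for p in v.split(".") if p.isdigit())
    return as_tuple(version) >= as_tuple(minimum)

def check_environment():
    """Return a list of missing (or too old) dependencies; empty means OK."""
    missing = []
    for pkg in ("PIL", "numpy", "scipy", "matplotlib", "six"):
        try:
            importlib.import_module(pkg)
        except ImportError:
            missing.append(pkg)
    try:
        import tensorflow as tf
        if not meets_min_version(tf.__version__, "1.0"):
            missing.append("tensorflow>=1.0")
    except ImportError:
        missing.append("tensorflow")
    return missing

if __name__ == "__main__":
    print(check_environment() or "all dependencies found")
```

Note the numeric comparison: naive string comparison would rank "1.10" below "1.2", which matters for TensorFlow's two-digit minor versions.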

Demo online training and testing

  1. In the demo script, edit the 'User defined parameters' (e.g. gpu_id, train_model, etc.).
  2. Run the script.

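A 'User defined parameters' block of this kind typically looks like the sketch below. The exact names and defaults are assumptions for illustration only (blackswan is one example DAVIS 2016 sequence); check the script itself for the real ones.

```python
# Hypothetical sketch of a 'User defined parameters' section;
# the actual names and defaults in the demo script may differ.
gpu_id = 0                 # which GPU to run on
train_model = True         # True: run one-shot online training; False: reuse a saved model
seq_name = "blackswan"     # DAVIS 2016 sequence to segment (assumed example)
max_training_iters = 500   # online fine-tuning iterations (assumed default)
```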

It is possible to work with all the sequences of DAVIS 2016 just by creating a soft link (ln -s /path/to/DAVIS/ DAVIS) in the root folder of the project.
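If you prefer to create that link from Python (e.g. inside a setup script), a minimal sketch is below, assuming a POSIX system; the helper name is hypothetical.

```python
# Python equivalent of: ln -s /path/to/DAVIS/ DAVIS   (POSIX only)
import os

def link_davis(davis_path, link_name="DAVIS"):
    """Soft-link an existing DAVIS checkout into the project root."""
    if not os.path.isdir(davis_path):
        raise FileNotFoundError("DAVIS dataset not found at %s" % davis_path)
    # lexists() also detects an existing (possibly dangling) symlink
    if not os.path.lexists(link_name):
        os.symlink(davis_path, link_name)
    return os.path.realpath(link_name)
```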

Training the parent network (optional)

  1. All the training sequences of DAVIS 2016 are required to train the parent model, so download the dataset from here if you don't have it.
  2. Place the dataset in this repository or create a soft link to it (
    ln -s /path/to/DAVIS/ DAVIS
    ) if you have it somewhere else.
  3. Download the VGG 16 model trained on Imagenet from the TF model zoo from here.
  4. Place the downloaded vgg_16.ckpt file inside this repository.
  5. Edit the 'User defined parameters' (e.g. gpu_id) in the parent training script.
  6. Run the parent training script. This step takes around 20 hours to train (Titan-X Pascal) and ~15 GB of memory for loading data and online data augmentation. Adjust the parameters accordingly for a less memory-intensive setup.
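Given the roughly 20-hour training run, it can be worth a quick pre-flight check that the VGG-16 checkpoint from step 4 is actually in place before launching. The helper below is a hypothetical convenience, and models_dir must point to wherever you placed the file.

```python
# Hypothetical pre-flight check: fail fast if vgg_16.ckpt is missing,
# rather than discovering it after starting a ~20-hour training run.
import os

def vgg_checkpoint_ready(models_dir):
    """Return True if the ImageNet VGG-16 checkpoint is where we expect it."""
    return os.path.isfile(os.path.join(models_dir, "vgg_16.ckpt"))
```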

Have a happy training!


If you use this code, please consider citing the following paper:

@Inproceedings{Cae+17,
  Title          = {One-Shot Video Object Segmentation},
  Author         = {S. Caelles and K.K. Maninis and J. Pont-Tuset and L. Leal-Taix\'e and D. Cremers and L. {Van Gool}},
  Booktitle      = {Computer Vision and Pattern Recognition (CVPR)},
  Year           = {2017}
}

If you encounter any problems with the code, want to report bugs, etc., please contact me at scaelles[at]vision[dot]ee[dot]ethz[dot]ch.
