
Deep Feature Flow for Video Recognition


Deep Feature Flow is initially described in a [CVPR 2017 paper].

It provides a simple, fast, accurate, and end-to-end framework for video recognition (e.g., object detection and semantic segmentation in videos). It is worth noting that:

  • Deep Feature Flow significantly speeds up video recognition by applying the heavy-weight image recognition network (e.g., ResNet-101) on sparse key frames, and propagating the recognition outputs (feature maps) to the other frames by the light-weight flow network (e.g., [FlowNet]); a sketch of this scheme follows the list below.
  • The entire system is trained end-to-end for the task of video recognition, which is vital for improving the recognition accuracy. Directly adopting state-of-the-art flow estimation methods without end-to-end training would deliver noticeably worse results.
  • Deep Feature Flow can easily make use of sparsely annotated video recognition datasets, where only a small portion of the frames are annotated with ground-truth labels.
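For intuition, the per-frame inference scheme can be sketched roughly as follows (plain Python pseudostructure, not the actual MXNet code in this repository; `feature_net`, `flow_net`, `warp`, and `task_head` are hypothetical placeholders for the recognition backbone, the flow network, the feature-warping operation, and the detection/segmentation head):

```python
def recognize_video(frames, feature_net, flow_net, warp, task_head, key_duration=10):
    """Sketch of the key-frame scheme: run the heavy feature_net only on sparse
    key frames, and propagate its feature maps to the other frames via the
    light-weight flow_net plus warping."""
    outputs = []
    key_frame = key_feat = None
    for i, frame in enumerate(frames):
        if i % key_duration == 0:
            key_frame, key_feat = frame, feature_net(frame)  # expensive, run sparsely
            feat = key_feat
        else:
            flow = flow_net(key_frame, frame)                # cheap motion estimate
            feat = warp(key_feat, flow)                      # propagate key-frame features
        outputs.append(task_head(feat))                      # detection / segmentation head
    return outputs
```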

Click image to watch our demo video

[Demo Video on YouTube]


This is an official implementation for Deep Feature Flow for Video Recognition (DFF) based on MXNet. It is worth noting that:

  • The original implementation is based on our internal Caffe version on Windows. There are slight differences in the final accuracy and running time due to the many small details involved in switching platforms.


© Microsoft, 2018. Licensed under the MIT License.

Citing Deep Feature Flow

If you find Deep Feature Flow useful in your research, please consider citing:

```
@inproceedings{zhu17dff,
  Author = {Xizhou Zhu, Yuwen Xiong, Jifeng Dai, Lu Yuan, Yichen Wei},
  Title = {Deep Feature Flow for Video Recognition},
  Conference = {CVPR},
  Year = {2017}
}

@inproceedings{dai16rfcn,
  Author = {Jifeng Dai, Yi Li, Kaiming He, Jian Sun},
  Title = {{R-FCN}: Object Detection via Region-based Fully Convolutional Networks},
  Conference = {NIPS},
  Year = {2016}
}
```

Main Results

|                                                   | training data                  | testing data            | mAP@0.5 | time/image (Tesla K40) | time/image (Maxwell Titan X) |
|---------------------------------------------------|--------------------------------|-------------------------|---------|------------------------|------------------------------|
| Frame baseline (R-FCN, ResNet-v1-101)             | ImageNet DET train + VID train | ImageNet VID validation | 74.1    | 0.271s                 | 0.133s                       |
| Deep Feature Flow (R-FCN, ResNet-v1-101, FlowNet) | ImageNet DET train + VID train | ImageNet VID validation | 73.0    | 0.073s                 | 0.034s                       |

Running time is measured on a single GPU (mini-batch size 1 in inference; key-frame duration length for Deep Feature Flow is 10).

The light-weight FlowNet seems to run a bit slower on MXNet than on Caffe.
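As a rough illustration of why sparse key frames help (not an official breakdown from the paper; the per-frame flow cost below is derived from the Tesla K40 numbers in the table above, not separately reported):

```python
# Rough amortized-cost illustration (derived from the table above, not an
# officially reported breakdown).
t_key = 0.271   # s/image: full R-FCN + ResNet-101 pass on a key frame (Tesla K40)
t_dff = 0.073   # s/image: reported amortized Deep Feature Flow time (Tesla K40)
l = 10          # key-frame duration length

# Amortized cost per frame: (t_key + (l - 1) * t_flow) / l == t_dff,
# so the implied flow + warping cost per non-key frame is:
t_flow = (l * t_dff - t_key) / (l - 1)
print("implied per-frame flow/warp cost: %.3f s" % t_flow)  # ~0.051 s
```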

Requirements: Software

  1. MXNet from the official repository. Due to the rapid development of MXNet, it is recommended to check out the version used below (commit 62ecb60, see step 3.1 of the installation) if you encounter any issues. We may maintain this repository periodically if MXNet adds important features in future releases.

  2. Python 2.7. We recommend using Anaconda2 as it already includes many common packages. We do not support Python 3 yet; if you want to use Python 3, you need to modify the code to make it work.

  3. The following Python packages might be missing: cython, opencv-python >= 3.2.0, easydict. If pip is set up on your system, those packages can be fetched and installed by running

    pip install Cython
    pip install "opencv-python>=3.2.0"
    pip install easydict==1.6
  4. For Windows users, Visual Studio 2015 is needed to compile the cython module.

Requirements: Hardware

Any NVIDIA GPU with at least 6GB of memory should be OK.


Installation

  1. Clone the Deep Feature Flow repository, and we'll call the directory that you cloned Deep-Feature-Flow as ${DFF_ROOT}.

    git clone

  2. For Windows users, run

    cmd .\init.bat

    For Linux users, run

    sh ./init.sh

    The scripts will build the cython module automatically and create some folders.
  3. Install MXNet:

    3.1 Clone MXNet and checkout to MXNet@(commit 62ecb60) by

    git clone --recursive
    git checkout 62ecb60
    git submodule update
    3.2 Copy the operators in ${DFF_ROOT}/dff_rfcn/operator_cxx to ${MXNET_ROOT}/src/operator/contrib by
    cp -r ${DFF_ROOT}/dff_rfcn/operator_cxx/* ${MXNET_ROOT}/src/operator/contrib/
    3.3 Compile MXNet
    cd ${MXNET_ROOT}
    make -j4
    3.4 Install the MXNet Python binding by

    Note: If you will actively switch between different versions of MXNet, please follow 3.5 instead of 3.4

    cd python
    sudo python setup.py install
    3.5 For advanced users, you may put your MXNet Python package into a separate folder of your choice and modify the corresponding path in the experiment configuration, so that you can switch among different versions of MXNet quickly.
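One simple way to do this (a sketch under assumptions, not necessarily the mechanism this repo's configs use; the directory path below is hypothetical) is to prepend the desired package directory to `sys.path` before importing MXNet:

```python
# Sketch: select a locally built MXNet package at run time (illustration only;
# the directory below is a hypothetical example, not a path from this repo).
import sys

MXNET_PACKAGE_DIR = "/path/to/mxnet-62ecb60/python"  # hypothetical local build
sys.path.insert(0, MXNET_PACKAGE_DIR)                # take priority over installed copies

import mxnet
print(mxnet.__path__)  # confirm which MXNet package was actually loaded
```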


Demo

  1. To run the demo with our trained model (on ImageNet DET + VID train), please download the model manually from OneDrive and put it under the expected model folder.

    Make sure it looks like this:

  2. Run (inference batch size = 1)

    python ./rfcn/
    python ./dff_rfcn/
    or run (inference batch size = 10)
    python ./rfcn/
    python ./dff_rfcn/

Preparation for Training & Testing

  1. Please download the ILSVRC2015 DET and ILSVRC2015 VID datasets, and make sure it looks like this:



FAQ

Q: It says

    AttributeError: 'module' object has no attribute 'MultiProposal'

A: This happens because either

  • you forgot to copy the operators to your MXNet folder, or
  • you copied them to the wrong path, or
  • you forgot to re-compile and install MXNet, or
  • you installed the wrong MXNet.

Please print `mxnet.__path__` to make sure you are using the correct MXNet.
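For example, a quick check (a sketch; the exact namespace under which the operator is exposed can vary with the MXNet version):

```python
# Quick sanity check that the custom operators were compiled into the MXNet
# you are importing (sketch; namespace may differ across MXNet versions).
import mxnet as mx

print(mx.__path__)                                   # which MXNet installation is loaded?
print(hasattr(mx.symbol.contrib, 'MultiProposal'))   # expect True after copying + rebuilding
```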

Q: I encounter

    segmentation fault

at the beginning.

A: A compatibility issue has been identified between MXNet and opencv-python 3.0+. We suggest that you always `import cv2` before `import mxnet` in the entry script.
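For example, at the top of the entry script:

```python
# Work around the reported MXNet / opencv-python 3.0+ incompatibility by
# fixing the import order at the very top of the entry script.
import cv2      # import cv2 first
import mxnet    # only then import mxnet
```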

Q: I find that the training speed becomes slower after training for a long time.

A: It has been identified that MXNet on Windows has this problem, so we recommend running this program on Linux. You can also stop and resume the training process to regain the training speed if you encounter this problem.

Q: Can you share your Caffe implementation?

A: For several reasons (the code is based on an old, internal Caffe; porting it to the public Caffe needs extra work; time limits; etc.), we do not plan to release our Caffe code. Since a warping layer is easy to implement, anyone who wishes to do so is welcome to make a pull request.
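For reference, the core idea of such a warping layer, bilinear sampling of the key frame's feature maps at positions displaced by the flow, can be sketched in NumPy as follows (an illustration of the idea only, not the operator shipped with this repo):

```python
import numpy as np

def warp_features(feat, flow):
    """Bilinearly warp a feature map by a flow field (NumPy illustration only).

    feat: (C, H, W) feature map from the key frame.
    flow: (2, H, W) displacement field; flow[0]/flow[1] are x/y offsets in pixels.
    """
    C, H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # source sampling positions for each target pixel, clamped to the image
    src_x = np.clip(xs + flow[0], 0, W - 1)
    src_y = np.clip(ys + flow[1], 0, H - 1)

    x0, y0 = np.floor(src_x).astype(int), np.floor(src_y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = src_x - x0, src_y - y0

    # bilinear interpolation, applied identically to every channel
    top    = feat[:, y0, x0] * (1 - wx) + feat[:, y0, x1] * wx
    bottom = feat[:, y1, x0] * (1 - wx) + feat[:, y1, x1] * wx
    return top * (1 - wy) + bottom * wy
```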
