
Actions as Moving Points

Pytorch implementation of Actions as Moving Points (ECCV 2020).

View each action instance as a trajectory of moving points.
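As a concrete (simplified, hypothetical) illustration of this view, the sketch below assembles a tubelet from a key-frame center, per-frame movement offsets, and per-frame box sizes. The function name, argument shapes, and values are illustrative assumptions, not the repository's API:

```python
import numpy as np

def decode_tubelet(center, movements, sizes):
    """Build per-frame boxes from a key-frame center, per-frame (dx, dy)
    movement offsets, and per-frame (w, h) box sizes.

    center:    (2,)   key-frame center (x, y)
    movements: (K, 2) offset of each frame's center from the key frame
    sizes:     (K, 2) box width/height at each frame
    Returns:   (K, 4) boxes as (x1, y1, x2, y2)
    """
    centers = center[None, :] + movements  # the moving-point trajectory
    half = sizes / 2.0
    return np.concatenate([centers - half, centers + half], axis=1)

# A 3-frame tubelet whose center drifts right by 2 px per frame.
boxes = decode_tubelet(
    np.array([10.0, 10.0]),
    np.array([[0.0, 0.0], [2.0, 0.0], [4.0, 0.0]]),
    np.array([[4.0, 4.0]] * 3),
)
```

Each row of `boxes` is one frame's detection; linking them over time yields the action tube.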

Visualization results on the validation set. (GIFs may take a few minutes to load.)

(Note that the relatively low scores are a property of the focal loss.)

News & Updates

Jul. 08, 2020 - First code release.

Jul. 24, 2020 - Updated the UCF-pretrained JHMDB model and the speed-test code.

Aug. 02, 2020 - Updated the visualization code: extract frames from a video and get detection results (like the GIFs above).

Aug. 17, 2020 - The visualization now supports instance-level detection results (reflecting video mAP).

Aug. 23, 2020 - Uploaded MOC with a ResNet-18 backbone.

MOC Detector Overview

  We present a new action tubelet detection framework, termed MovingCenter detector (MOC-detector), which treats an action instance as a trajectory of moving points. The MOC-detector is decomposed into three crucial head branches:

  • (1) Center Branch for instance center detection and action recognition.
  • (2) Movement Branch for estimating movement across adjacent frames to form moving-point trajectories.
  • (3) Box Branch for spatial extent detection by directly regressing bounding box size at the estimated center point of each frame.
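A minimal sketch of how these three branches might sit side by side as convolutional heads over stacked per-frame features. This assumes a feature tensor of shape `(B, C*K, H, W)` for `K` input frames; the class names, channel counts, and layer sizes are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class MOCHeads(nn.Module):
    """Sketch of the three MOC head branches (illustrative sizes)."""

    def __init__(self, in_channels, num_classes, K):
        super().__init__()

        def head(out_channels):
            return nn.Sequential(
                nn.Conv2d(in_channels, 256, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, out_channels, 1),
            )

        # Center Branch: per-class center heatmap for instance detection
        # and action recognition.
        self.center = head(num_classes)
        # Movement Branch: (dx, dy) per frame, linking centers across
        # the K frames into a moving-point trajectory.
        self.movement = head(2 * K)
        # Box Branch: (w, h) per frame, regressed at each frame's
        # estimated center point.
        self.box = head(2 * K)

    def forward(self, feats):
        return self.center(feats), self.movement(feats), self.box(feats)

# Example: K = 7 frames, 64 feature channels per frame, 24 action classes.
heads = MOCHeads(in_channels=64 * 7, num_classes=24, K=7)
feats = torch.randn(1, 64 * 7, 72, 72)
hm, mov, wh = heads(feats)
```

Running the three heads on a shared feature map keeps detection, trajectory linking, and box regression decoupled, which is the decomposition the paragraph above describes.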

MOC-Detector Usage

1. Installation

Please refer to the installation guide for installation instructions.

2. Dataset

Please refer to the dataset guide for dataset setup instructions.

3. Evaluation

You can follow the evaluation instructions to evaluate our model and reproduce the results in the original paper.

4. Train

You can follow the training instructions to train our models.

5. Visualization

You can follow the visualization instructions to get visualization results.


Acknowledgements

  • Data augmentation code from ACT.

  • Evaluation code from ACT.

  • DLA-34 backbone code from CenterNet.



See more in NOTICE


If you find this code useful in your research, please cite:

    @inproceedings{li2020actions,
      title={Actions as Moving Points},
      author={Yixuan Li and Zixu Wang and Limin Wang and Gangshan Wu},
      booktitle={arXiv preprint arXiv:2001.04608},
      year={2020}
    }
