Actions as Moving Points
PyTorch implementation of Actions as Moving Points (ECCV 2020).
View each action instance as a trajectory of moving points.
Visualization results on the validation set. (GIFs may take a few minutes to load.)
(Note that the relatively low confidence scores are a property of the focal loss used for training; see the sketch below.)
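The scores come from a center heatmap trained with a penalty-reduced, pixel-wise focal loss (in the style of CenterNet), whose (1 - pred)^alpha factor down-weights already-confident positives, so peak scores rarely saturate toward 1. Below is a minimal sketch of that loss; the function name and the default hyperparameters (alpha=2, beta=4) are illustrative, not the repo's exact code.

```python
import torch

def centernet_focal_loss(pred, gt, alpha=2, beta=4):
    # pred: predicted center heatmap in (0, 1), shape (B, C, H, W);
    # gt: Gaussian-splatted ground-truth heatmap of the same shape.
    pos = gt.eq(1).float()  # exact center locations
    neg = gt.lt(1).float()  # everything else

    # Positives: (1 - pred)^alpha shrinks the gradient once a peak is
    # already confident, so predictions are not pushed all the way to 1.
    pos_loss = torch.log(pred) * torch.pow(1 - pred, alpha) * pos
    # Negatives near a true center are penalized less via (1 - gt)^beta.
    neg_loss = (torch.log(1 - pred) * torch.pow(pred, alpha)
                * torch.pow(1 - gt, beta) * neg)

    num_pos = pos.sum().clamp(min=1)
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos
```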
Jul. 08, 2020 - First release of the code.
Jul. 24, 2020 - Updated the UCF-pretrained JHMDB model and the speed-test code.
Aug. 02, 2020 - Updated the visualization code: extract frames from a video and get detection results like the GIFs above (see the frame-extraction sketch after this list).
Aug. 17, 2020 - Visualization now supports instance-level detection results (reflecting video mAP).
Aug. 23, 2020 - Uploaded MOC with a ResNet-18 backbone.
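For reference, the visualization pipeline works on per-frame images, so a video has to be split into frames first. Below is a minimal sketch using OpenCV; the helper name and the zero-padded JPEG naming are illustrative assumptions, and Visualization.md describes the repo's own procedure.

```python
import os
import cv2  # OpenCV

def extract_frames(video_path, out_dir):
    """Dump every frame of a video as numbered JPEGs (hypothetical helper)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video (or read error)
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()

# Example: extract_frames("demo.mp4", "frames/demo")
```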
We present a new action tubelet detection framework, termed MovingCenter detector (MOC-detector), which treats an action instance as a trajectory of moving points. MOC-detector decomposes tubelet detection into three crucial head branches:

1. Center Branch: detects the action instance center and its category on the key frame.
2. Movement Branch: estimates the center movement at adjacent frames to form the trajectory of moving points.
3. Box Branch: regresses the bounding box size at each estimated center point to determine the spatial extent.
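To make the decomposition concrete, here is a minimal PyTorch sketch of the three heads; the channel counts, the 3x3-conv head design, and the per-frame wiring of the Box branch are illustrative assumptions, not the repo's exact modules.

```python
import torch
import torch.nn as nn

class MOCHeads(nn.Module):
    """Sketch of the three MOC head branches over backbone features
    of K consecutive frames, shape (B, K, C, H, W)."""

    def __init__(self, in_channels, num_classes, K):
        super().__init__()
        self.K = K

        def head(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, 256, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(256, c_out, kernel_size=1),
            )

        # Center branch: per-class tubelet-center heatmap (frames stacked
        # along channels so the head sees the whole clip at once).
        self.center = head(K * in_channels, num_classes)
        # Movement branch: (dx, dy) offsets from the key-frame center to
        # the center in each of the K frames.
        self.movement = head(K * in_channels, 2 * K)
        # Box branch: bounding-box width/height, predicted per frame.
        self.box = head(in_channels, 2)

    def forward(self, feats):
        b, k, c, h, w = feats.shape
        stacked = feats.reshape(b, k * c, h, w)
        center = torch.sigmoid(self.center(stacked))        # (B, classes, H, W)
        movement = self.movement(stacked)                   # (B, 2K, H, W)
        per_frame = feats.reshape(b * k, c, h, w)
        box = self.box(per_frame).reshape(b, k, 2, h, w)    # (B, K, 2, H, W)
        return center, movement, box
```

In the paper, the default tubelet length is K = 7 frames.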
Please refer to Installation.md for installation instructions.
Please refer to Dataset.md for dataset setup instructions.
You can follow the instructions in Evaluation.md to evaluate our model and reproduce the results in the original paper.
You can follow the instructions in Train.md to train our models.
You can follow the instructions in Visualization.md to get visualization results.
See more in NOTICE.
If you find this code useful in your research, please cite:
@InProceedings{li2020actions,
  title     = {Actions as Moving Points},
  author    = {Yixuan Li and Zixu Wang and Limin Wang and Gangshan Wu},
  booktitle = {arXiv preprint arXiv:2001.04608},
  year      = {2020}
}