


Vision toolbox for video-related tasks, including action recognition and multi-object tracking.



AlphaVideo is an open-source video understanding toolbox based on PyTorch, covering multi-object tracking and action detection. In AlphaVideo, we released TubeTK, the first one-stage multi-object tracking (MOT) system, which achieves 66.9 MOTA on the MOT-16 dataset and 63.0 MOTA on the MOT-17 dataset. For action detection, we released the efficient AlphAction model, the first open-source project to reach 30+ mAP (32.4 mAP) with a single model on the AVA dataset.
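For context on the MOTA scores quoted above, the metric penalizes misses, false alarms, and identity switches against the total number of ground-truth objects. A minimal sketch (our illustration, not AlphaVideo code):

```python
# MOTA (Multi-Object Tracking Accuracy) over per-frame error counts:
#     MOTA = 1 - (FN + FP + IDSW) / GT
# This is an illustrative sketch, not part of AlphaVideo's API.
def mota(false_negatives, false_positives, id_switches, ground_truths):
    """Each argument is a per-frame list of counts; sums are taken over frames."""
    fn = sum(false_negatives)
    fp = sum(false_positives)
    idsw = sum(id_switches)
    gt = sum(ground_truths)
    return 1.0 - (fn + fp + idsw) / gt

# Toy numbers: 100 ground-truth boxes, 20 misses, 10 false alarms, 3 ID switches
print(mota([20], [10], [3], [100]))  # ≈ 0.67
```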

Quick Start


Run this command:

pip install alphavideo

From source

Clone the repository from GitHub:

git clone alphaVideo
cd alphaVideo

Set up and install AlphaVideo:

pip install .

Features & Capabilities

  • Multi-Object Tracking

For this task, we provide the TubeTK model, the official implementation of the paper "TubeTK: Adopting Tubes to Track Multi-Object in a One-Step Training Model" (CVPR 2020, oral). Detailed training and testing scripts for the MOT-Challenge datasets can be found here.

* Accurate end-to-end multi-object tracking.
* Does not require any ready-made image-level object detection models.
* Pre-trained model for pedestrian tracking.
* Input: frame list or video.
* Output: videos decorated with colored bounding boxes; Btube lists.
* For detailed usage, see our docs.
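The "Btube lists" in the output above are track segments defined by boxes at a few key frames rather than a detection per frame. A hypothetical sketch of such a structure (the class name, fields, and interpolation scheme are our assumptions for illustration, not AlphaVideo's actual API):

```python
from dataclasses import dataclass

@dataclass
class Btube:
    """Hypothetical bounding-box tube: boxes stored at key frames;
    boxes for frames in between are linearly interpolated."""
    track_id: int
    keyframes: dict  # frame index -> (x1, y1, x2, y2)

    def box_at(self, frame):
        if frame in self.keyframes:
            return self.keyframes[frame]
        frames = sorted(self.keyframes)
        # surrounding key frames, then linear interpolation of each coordinate
        prev = max(f for f in frames if f < frame)
        nxt = min(f for f in frames if f > frame)
        t = (frame - prev) / (nxt - prev)
        return tuple(a + t * (b - a)
                     for a, b in zip(self.keyframes[prev], self.keyframes[nxt]))

tube = Btube(track_id=1, keyframes={0: (0, 0, 10, 10), 4: (8, 0, 18, 10)})
print(tube.box_at(2))  # box halfway between the two key frames
```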
  • Action recognition

For this task, we provide the AlphAction model, an implementation of the paper "Asynchronous Interaction Aggregation for Action Detection". This paper was recently accepted by ECCV 2020!

* Accurate and efficient action detection.
* Pre-trained model for the 80 atomic action categories defined in AVA.
* Input: video or camera stream.
* Output: videos decorated with human boxes and the corresponding action predictions.
* For detailed usage, see our docs.
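The mAP figure quoted earlier is the mean over action classes of the per-class average precision. A minimal sketch of that computation (our illustration, not AlphAction's evaluation code):

```python
# Average precision per class, then mean over classes (mAP).
# Illustrative sketch only; the official AVA evaluation has more detail
# (IoU matching of boxes, etc.).
def average_precision(scored_preds, num_positives):
    """scored_preds: list of (score, is_true_positive) pairs.
    AP = mean of precision at each recall step (area under the PR curve)."""
    scored_preds = sorted(scored_preds, key=lambda p: -p[0])
    tp = 0
    ap = 0.0
    for rank, (_, is_tp) in enumerate(scored_preds, start=1):
        if is_tp:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / num_positives

def mean_ap(per_class):
    """per_class: list of (scored_preds, num_positives), one entry per class."""
    return sum(average_precision(p, n) for p, n in per_class) / len(per_class)
```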

Paper and Citations

@inproceedings{pang2020tubetk,
  title={TubeTK: Adopting Tubes to Track Multi-Object in a One-Step Training Model},
  author={Pang, Bo and Li, Yizhuo and Zhang, Yifan and Li, Muchen and Lu, Cewu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020}
}

@inproceedings{tang2020asynchronous,
  title={Asynchronous Interaction Aggregation for Action Detection},
  author={Tang, Jiajun and Xia, Jin and Mu, Xinzhi and Pang, Bo and Lu, Cewu},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}


This project is open-sourced and maintained by the Machine Vision and Intelligence Group (MVIG) at Shanghai Jiao Tong University.
