CTracker (ECCV2020 Spotlight)

Official implementation in PyTorch of Chained-Tracker as described in Chained-Tracker: Chaining Paired Attentive Regression Results for End-to-End Joint Multiple-Object Detection and Tracking.

The introduction video of CTracker is available on YouTube.

The code is tested with PyTorch 0.4.1; it may not run with other versions.

Video demos on MOT challenge test set


Installation

  • Clone this repo into a directory named CTRACKER_ROOT
  • Install the required packages
    apt-get install tk-dev python-tk
  • Install the Python dependencies (a quick environment check is sketched after this list). We use Python 3.6.5 and PyTorch 0.4.1
    conda create -n CTracker
    conda activate CTracker
    conda install pytorch=0.4.1 cuda90 -c pytorch
    pip install -r requirements.txt
    sh lib/
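To confirm that the environment matches what the code expects, a quick check like the one below can help. This is a minimal sketch, not part of the repository; it only assumes the conda environment created above is active.

```python
# check_env.py -- quick environment sanity check (illustrative, not part of the repo)
import sys
import torch

print("Python:", sys.version.split()[0])      # expected: 3.6.x
print("PyTorch:", torch.__version__)          # expected: 0.4.1
print("CUDA available:", torch.cuda.is_available())
```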

Organize MOT17 dataset

The MOT17 dataset can be downloaded from MOTChallenge.

We use two CSV files to organize the MOT17 dataset: one containing annotations and one containing a class-name-to-ID mapping.

We provide the two CSV files for MOT17 together with the code in CTRACKER_ROOT/data; you should copy them to MOT17_ROOT before starting training.

Dataset structure:

MOT17_ROOT/
        |->train/
        |    |->MOT17-02/
        |    |->MOT17-04/
        |    |->...
        |->test/
        |    |->MOT17-01/
        |    |->MOT17-03/
        |    |->...
        |->(the two CSV files copied from CTRACKER_ROOT/data)

MOT17_ROOT is the path to your MOT17 dataset.
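A small script can confirm that MOT17_ROOT matches the structure above before training. This is an illustrative sketch only; it checks just the train/ and test/ folders, and the MOT17_ROOT path is a placeholder you must adjust.

```python
# verify_layout.py -- illustrative check that MOT17_ROOT follows the layout above
import os

MOT17_ROOT = "/path/to/MOT17"  # set this to your MOT17 dataset path

for split in ("train", "test"):
    split_dir = os.path.join(MOT17_ROOT, split)
    if not os.path.isdir(split_dir):
        raise SystemExit("Missing folder: %s" % split_dir)
    seqs = sorted(d for d in os.listdir(split_dir) if d.startswith("MOT17-"))
    print("%s: %d sequences, e.g. %s" % (split, len(seqs), seqs[:3]))
```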

Annotations format

The CSV file with annotations should contain one annotation per line; images with multiple bounding boxes should use one row per bounding box. Note that indexing for pixel values starts at 0. The expected per-line format can be seen in the annotation CSV file provided in CTRACKER_ROOT/data.

The MOT17 CSV file can be generated with the Python script included in the repository; you can modify this script to handle other datasets.
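For reference, the sketch below shows one way such an annotation CSV could be built from the official MOT17 gt.txt files. It is only an illustration, not the repository's generation script: the output column order (image path, track ID, x1, y1, x2, y2, class name) and the class name string are assumptions here, so the CSV file shipped in CTRACKER_ROOT/data remains the authoritative reference.

```python
# mot_gt_to_csv.py -- illustrative sketch, not the repository's generation script.
# Assumed output columns: img_path, track_id, x1, y1, x2, y2, class_name.
import csv
import os

MOT17_ROOT = "/path/to/MOT17"       # set this to your MOT17 dataset path
OUT_CSV = "train_annots_sketch.csv"

with open(OUT_CSV, "w", newline="") as out:
    writer = csv.writer(out)
    train_dir = os.path.join(MOT17_ROOT, "train")
    for seq in sorted(os.listdir(train_dir)):
        gt_path = os.path.join(train_dir, seq, "gt", "gt.txt")
        if not os.path.isfile(gt_path):
            continue
        with open(gt_path) as f:
            for line in f:
                # MOT gt.txt columns: frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility
                frame, tid, x, y, w, h, conf, cls = line.strip().split(",")[:8]
                if int(cls) != 1:                  # keep pedestrians only (MOT class 1)
                    continue
                # MOT coordinates are 1-based; shift to 0-based pixel indexing
                x1, y1 = float(x) - 1, float(y) - 1
                x2, y2 = x1 + float(w), y1 + float(h)
                img = os.path.join(train_dir, seq, "img1", "%06d.jpg" % int(frame))
                writer.writerow([img, int(tid), x1, y1, x2, y2, "person"])  # class name is an assumption
```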

Class mapping format

The class name to ID mapping file should contain one mapping per line, with the class name followed by its numeric ID (comma-separated, like the annotation CSV).

Indexing for classes starts at 0. Do not include a background class, as it is implicit.

For example, a MOT17 mapping that only uses the pedestrian class contains a single entry with ID 0.

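As a quick illustration of how such a mapping can be consumed, the snippet below reads it into a dictionary. It is a sketch assuming the name-then-ID order described above; the helper name is hypothetical and not part of the repository.

```python
# load_class_map.py -- illustrative helper, not part of the repo
import csv

def load_class_map(path):
    """Read a class-name-to-ID CSV into a {name: id} dict."""
    with open(path) as f:
        rows = [r for r in csv.reader(f) if r]   # skip blank lines
        return {name.strip(): int(idx) for name, idx in rows}

# Example: a one-class MOT17 mapping yields a dict like {<pedestrian class>: 0}
```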

Training

The network can be trained using the training script in the repository. For training on MOT17, use:

CUDA_VISIBLE_DEVICES=0 python --root_path MOT17_ROOT --model_dir ./ctracker/ --depth 50

By default, testing will start immediately after training finishes.


Testing

A trained model is available at Google Drive / Tencent Weiyun; run the following command to start testing:

CUDA_VISIBLE_DEVICES=0 python --dataset_path MOT17_ROOT --model_dir ./trained_model/


Citing CTracker

If you find CTracker useful in your project, please consider citing us:

@inproceedings{peng2020ctracker,
  title={Chained-Tracker: Chaining Paired Attentive Regression Results for End-to-End Joint Multiple-Object Detection and Tracking},
  author={Peng, Jinlong and Wang, Changan and Wan, Fangbin and Wu, Yang and Wang, Yabiao and Tai, Ying and Wang, Chengjie and Li, Jilin and Huang, Feiyue and Fu, Yanwei},
  booktitle={Proceedings of the European Conference on Computer Vision},
  year={2020}
}
