The 3rd Place Submission to AICity Challenge 2020 Track2

Multi-Domain Learning and Identity Mining for Vehicle Re-Identification

This repository contains our source code for Track 2 of the NVIDIA AI City Challenge at the CVPR 2020 Workshop. Our paper is *Multi-Domain Learning and Identity Mining for Vehicle Re-Identification* (Proc. CVPR Workshops, 2020).

Authors

Shuting He, Hao Luo, Weihua Chen, Miao Zhang, Yuqi Zhang, Fan Wang, Hao Li, Wei Jiang

Introduction

Detailed information of NVIDIA AI City Challenge 2020 can be found here.

The code is modified from reid-strong-baseline and person-reid-tiny-baseline.

Get Started

  1. cd to the folder where you want to download this repo
  2. Run

     git clone https://github.com/heshuting555/AICITY2020_DMT_VehicleReID.git

  3. Install dependencies:

     We use CUDA 9.0 / Python 3.6.7 / torch 1.2.0 / torchvision 0.4.0 for training and testing.

  4. Prepare the dataset. We have to change the first line in
     AIC20_track2/AIC20_ReID/train_label.xml
     as below:



     into:



  5. ResNet-IBN is applied as the backbone. Download the ImageNet pretrained model here.
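The exact declaration lines for the train_label.xml change in step 4 are not reproduced above. As an illustration only, here is a small helper that swaps the first line of a file; the replacement shown in the usage comment (switching the declared XML encoding to utf-8) is a hypothetical example, not the repository's confirmed edit:

```python
# Sketch: replace the first line of a label file such as train_label.xml.
# The exact replacement line is not preserved in this README excerpt; the
# encoding-declaration example below is hypothetical.
from pathlib import Path

def replace_first_line(path, new_first_line):
    """Rewrite the file at `path` so its first line becomes `new_first_line`,
    leaving all remaining lines unchanged."""
    lines = Path(path).read_text(encoding="utf-8", errors="replace").splitlines()
    lines[0] = new_first_line
    Path(path).write_text("\n".join(lines) + "\n", encoding="utf-8")

# Hypothetical usage:
# replace_first_line("AIC20_track2/AIC20_ReID/train_label.xml",
#                    '<?xml version="1.0" encoding="utf-8"?>')
```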

RUN

  1. If you want to reproduce our online score in the AI City Challenge 2020 Track 2, use the following command:

     bash run.sh

Note: you can download our trained models and distance matrices for AICITY2020 here.
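The released distance matrices feed the "+ Ensemble" row of the results below. A minimal sketch of one plausible fusion scheme, assuming (this is an assumption, not the confirmed procedure in run.sh) that per-model query-gallery distance matrices are simply averaged before ranking:

```python
# Sketch: ensembling re-ID models by averaging their query-gallery distance
# matrices and ranking gallery images by the fused distance. The averaging
# scheme here is an assumption for illustration.
import numpy as np

def ensemble_rank(dist_matrices, topk=100):
    """dist_matrices: list of (num_query, num_gallery) arrays, one per model.
    Returns, for each query, the indices of the topk closest gallery images
    under the averaged distance."""
    fused = np.mean(np.stack(dist_matrices, axis=0), axis=0)
    return np.argsort(fused, axis=1)[:, :topk]
```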

  2. If you want to use our Multi-Domain Learning:

     # You need to train a model on a multi-domain dataset first (e.g., add the simulation dataset to AIC and then test on AIC).

python train.py --config_file='configs/baseline_aic_finetune.yml' MODEL.PRETRAIN_PATH "('your path for trained checkpoints')" MODEL.DEVICE_ID "('your device id')" OUTPUT_DIR "('your path to save checkpoints and logs')"
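The multi-domain step above first trains on real plus simulation data together. One practical detail when merging domains is keeping vehicle IDs from colliding; a sketch of that bookkeeping, with all names hypothetical (the repository's own dataset code may organize this differently):

```python
# Sketch: build a combined multi-domain training list from real (AIC) and
# simulation samples, offsetting simulation vehicle IDs past the largest
# real ID so the two label spaces do not collide. Names are illustrative.
def merge_domains(real_samples, sim_samples):
    """Each sample is (image_path, vehicle_id). Returns the real samples
    followed by the simulation samples with shifted IDs."""
    offset = max(pid for _, pid in real_samples) + 1
    merged = list(real_samples)
    merged += [(path, pid + offset) for path, pid in sim_samples]
    return merged
```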

  3. If you want to try our Identity Mining:

     # First, generate the selected query ids

python test_mining.py --config_file='configs/test_identity_mining.yml' TEST.WEIGHT "('your path for trained checkpoints')" OUTPUT_DIR "('your path to save selected query id')" --thresh 0.49

Note: the quality of the selected query ids depends on the performance of TEST.WEIGHT. You can change the value of thresh to obtain more or fewer query ids.

   # Then, train the model with the train set plus the test set (selected by the query ids generated above)

python train_IM.py --config_file='configs/baseline_aic.yml' --config_file_test='configs/test_train_IM.yml' OUTPUT_DIR "('your path to save checkpoints and logs')" MODEL.THRESH "(0.23)"

Note: you can change the value of MODEL.THRESH, which determines how many test-set images are added to the train set.
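Both thresh and MODEL.THRESH gate how confidently test images are attached to mined identities. A minimal sketch of the thresholding idea, assuming a simple nearest-distance criterion (test_mining.py and train_IM.py may use a more elaborate rule):

```python
# Sketch of the identity-mining idea: from a query-gallery distance matrix,
# keep only queries with at least one gallery image closer than `thresh`,
# and pseudo-label those gallery images with the query's index so they can
# be added to the training set. The exact selection rule in the repository
# may differ; this illustrates the thresholding step only.
import numpy as np

def mine_identities(dist, thresh):
    """dist: (num_query, num_gallery) distances.
    Returns (selected_query_ids, pseudo_labels), where pseudo_labels maps a
    gallery index to the query index it is confidently matched with."""
    selected, pseudo = [], {}
    for q in range(dist.shape[0]):
        matches = np.where(dist[q] < thresh)[0]
        if matches.size > 0:
            selected.append(q)
            for g in matches:
                pseudo[int(g)] = q
    return selected, pseudo
```

Raising the threshold admits more (but noisier) pseudo-labeled images, which is the trade-off both notes above describe.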

  4. If you want to generate cropped images, please refer to the cropdatasetgenerate directory for details.

  5. You can visualize the results given a track2.txt file (the AICITY-required submission format):

     python vis_txt_result.py --base_dir ('your path to the datasets') --result ('result file (txt format) path')
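Before visualizing, it can help to sanity-check the submission file itself. A small reader, assuming the format is one line per query holding space-separated gallery image IDs in ranked order (verify against the challenge's official submission spec):

```python
# Sketch: read a track2.txt-style submission file. Assumed format: one line
# per query, containing space-separated gallery image IDs ranked from most
# to least similar. Check the official spec before relying on this.
def load_track2(path):
    """Return a list of ranked gallery-ID lists, one per query."""
    with open(path) as f:
        return [[int(tok) for tok in line.split()] for line in f if line.strip()]
```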
  6. If you want to use our baseline on public datasets (such as the VeRi dataset):

     python train.py --config_file='configs/baseline_veri_r50.yml' MODEL.DEVICE_ID "('your device id')" OUTPUT_DIR "('your path to save checkpoints and logs')"

Results (mAP/Rank1)

| Model                      | AICITY2020  |
| -------------------------- | ----------- |
| ResNet101-IBN-a (baseline) | 59.73/69.30 |
| + Multi-Domain Learning    | 65.25/71.96 |
| + Identity Mining          | 68.54/74.81 |
| + Ensemble                 | 73.22/80.42 |

| Backbone (baseline)        | VeRi      | download     |
| -------------------------- | --------- | ------------ |
| ResNet50 (batch 48)        | 79.8/95.0 | model \| log |
| ResNet50-IBN-a (batch 48)  | 81.4/96.5 | model \| log |
| ResNet101-IBN-a (batch 48) | 82.8/97.1 | model \| log |
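The numbers above are mAP/Rank-1 pairs. For reference, a sketch of these two metrics computed from ranked gallery lists, following the standard re-ID definitions (not necessarily the challenge's exact evaluation script):

```python
# Sketch: mean Average Precision and Rank-1 accuracy for re-ID, from ranked
# gallery lists and per-query ground-truth match sets. Standard definitions;
# the official evaluation code may apply additional filtering.
def map_and_rank1(rankings, gt_sets):
    """rankings[q]: gallery IDs ranked for query q.
    gt_sets[q]: set of gallery IDs that truly match query q.
    Returns (mAP, Rank-1), each in [0, 1]."""
    aps, rank1_hits = [], 0
    for ranked, gt in zip(rankings, gt_sets):
        hits, precisions = 0, []
        for i, g in enumerate(ranked):
            if g in gt:
                hits += 1
                precisions.append(hits / (i + 1))  # precision at this recall point
        aps.append(sum(precisions) / len(gt) if gt else 0.0)
        rank1_hits += 1 if ranked and ranked[0] in gt else 0
    n = len(rankings)
    return sum(aps) / n, rank1_hits / n
```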

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{he2020multi,
 title={Multi-Domain Learning and Identity Mining for Vehicle Re-Identification},
 author={He, Shuting and Luo, Hao and Chen, Weihua and Zhang, Miao and Zhang, Yuqi and Wang, Fan and Li, Hao and Jiang, Wei},
 booktitle={Proc. CVPR Workshops},
 year={2020}
}
