Self-paced Contrastive Learning (SpCL)

Python >=3.5 PyTorch >=1.0

The official repository for Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID, which was accepted at NeurIPS 2020.

SpCL achieves state-of-the-art performance on both unsupervised domain adaptation tasks and unsupervised learning tasks for object re-ID, including person re-ID and vehicle re-ID.

(Figure: overview of the SpCL framework)

Updates

[2020-10-13] All trained models for the camera-ready version have been updated; see Trained Models for details.

[2020-09-25] SpCL has been accepted by NeurIPS on the condition that experiments on the DukeMTMC-reID dataset be removed, since the dataset has been taken down and should no longer be used.

[2020-07-01] We refactored the code to support distributed training, stronger performance, and more features. Please see OpenUnReID.

Requirements

Installation

```shell
git clone https://github.com/yxgeee/SpCL.git
cd SpCL
python setup.py develop
```
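
As a quick sanity check after installation, you can try importing the package. This is only a sketch: it assumes python setup.py develop registers the package under the name spcl and that PyTorch is already installed; adjust if your setup differs.

```shell
# Hypothetical sanity check: verify the package and PyTorch are importable
python -c "import spcl, torch; print('PyTorch', torch.__version__)"
```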

Prepare Datasets

cd examples && mkdir data

Download the person datasets Market-1501, MSMT17, PersonX, and the vehicle datasets VehicleID, VeRi-776, VehicleX. Then unzip them into a directory tree like the following:

SpCL/examples/data
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
├── personx
│   └── PersonX
├── vehicleid
│   └── VehicleID -> VehicleID_V1.0
├── vehiclex
│   └── AIC20_ReID_Simulation -> AIC20_track2/AIC20_ReID_Simulation
└── veri
    └── VeRi -> VeRi_with_plate
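
One possible way to lay this out from the downloaded archives is sketched below. The archive and folder names are assumptions based on the tree above and may differ from what you actually downloaded; the symlink targets are relative to each dataset folder.

```shell
cd examples/data
mkdir -p market1501 msmt17 personx vehicleid vehiclex veri
# Unzip each archive so the inner folders match the tree above, e.g.
#   market1501/Market-1501-v15.09.15, msmt17/MSMT17_V1, personx/PersonX
# Then create the symlinks shown in the tree (targets relative to each dataset folder):
ln -s VehicleID_V1.0 vehicleid/VehicleID
ln -s AIC20_track2/AIC20_ReID_Simulation vehiclex/AIC20_ReID_Simulation
ln -s VeRi_with_plate veri/VeRi
```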

Prepare ImageNet Pre-trained Models for IBN-Net

When training with the backbone of IBN-ResNet, you need to download the ImageNet-pretrained model from this link and save it under the path of logs/pretrained/.

```shell
mkdir logs && cd logs
mkdir pretrained
```

The file tree should be

```
SpCL/logs
└── pretrained
    └── resnet50_ibn_a.pth.tar
```

ImageNet-pretrained models for ResNet-50 will be downloaded automatically by the Python script.
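
If you prefer to fetch the torchvision ResNet-50 weights ahead of time (e.g., on a node without internet access at training time), a minimal sketch is below. It only covers the plain ResNet-50 weights; the resnet50_ibn_a.pth.tar checkpoint above still has to be downloaded and placed manually.

```shell
# Optional: pre-download torchvision's ImageNet ResNet-50 weights into the local torch cache
python -c "import torchvision; torchvision.models.resnet50(pretrained=True)"
```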

Training

We utilize 4 GTX-1080TI GPUs for training. Note that

  • The training for SpCL is end-to-end, which means that no source-domain pre-training is required.
  • Use --iters 400 (default) for the Market-1501 and PersonX datasets, and --iters 800 for the MSMT17, VeRi-776, VehicleID and VehicleX datasets.
  • Use --width 128 --height 256 (default) for person datasets, and --height 224 --width 224 for vehicle datasets.
  • Use -a resnet50 (default) for the backbone of ResNet-50, and -a resnet_ibn50a for the backbone of IBN-ResNet; a combined example is sketched right after this list.
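
For example, a run combining the flags above (IBN-ResNet backbone on a vehicle dataset) might look like the following sketch; the log directory name is illustrative, not a path used in the paper.

```shell
# Illustrative UDA run with the IBN-ResNet backbone on vehicle datasets,
# combining the flags listed above
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py -a resnet_ibn50a --iters 800 --height 224 --width 224 \
  -ds vehicleid -dt veri --logs-dir logs/spcl_uda/vehicleid2veri_resnet_ibn50a
```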

Unsupervised Domain Adaptation

To train the model(s) in the paper, run this command:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py \
  -ds $SOURCE_DATASET -dt $TARGET_DATASET --logs-dir $PATH_OF_LOGS
```

Some examples:

```shell
# PersonX -> Market-1501
# using all default settings is OK
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py \
  -ds personx -dt market1501 --logs-dir logs/spcl_uda/personx2market_resnet50

# Market-1501 -> MSMT17
# use all default settings except for iters=800
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 \
  -ds market1501 -dt msmt17 --logs-dir logs/spcl_uda/market2msmt_resnet50

# VehicleID -> VeRi-776
# use all default settings except for iters=800, height=224 and width=224
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 --height 224 --width 224 \
  -ds vehicleid -dt veri --logs-dir logs/spcl_uda/vehicleid2veri_resnet50
```

Unsupervised Learning

To train the model(s) in the paper, run this command:

```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py \
  -d $DATASET --logs-dir $PATH_OF_LOGS
```

Some examples:

```shell
# Market-1501
# using all default settings is OK
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py \
  -d market1501 --logs-dir logs/spcl_usl/market_resnet50

# MSMT17
# use all default settings except for iters=800
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py --iters 800 \
  -d msmt17 --logs-dir logs/spcl_usl/msmt_resnet50

# VeRi-776
# use all default settings except for iters=800, height=224 and width=224
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py --iters 800 --height 224 --width 224 \
  -d veri --logs-dir logs/spcl_usl/veri_resnet50
```

Evaluation

We utilize 1 GTX-1080TI GPU for testing. Note that

  • Use --width 128 --height 256 (default) for person datasets, and --height 224 --width 224 for vehicle datasets.
  • Use --dsbn for domain adaptive models, and add --test-source if you want to test on the source domain.
  • Use -a resnet50 (default) for the backbone of ResNet-50, and -a resnet_ibn50a for the backbone of IBN-ResNet; see the sketch right after this list.
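
As a sketch, testing a model trained with the IBN-ResNet backbone simply adds the backbone flag to the commands below; the checkpoint path here is illustrative.

```shell
# Illustrative evaluation of an IBN-ResNet model on the target domain
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py -a resnet_ibn50a --dsbn \
  -d msmt17 --resume logs/spcl_uda/market2msmt_resnet_ibn50a/model_best.pth.tar
```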

Unsupervised Domain Adaptation

To evaluate the domain adaptive model on the target-domain dataset, run:

```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn \
  -d $DATASET --resume $PATH_OF_MODEL
```

To evaluate the domain adaptive model on the source-domain dataset, run:

```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --test-source \
  -d $DATASET --resume $PATH_OF_MODEL
```

Some examples:

```shell
# Market-1501 -> MSMT17
# test on the target domain
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn \
  -d msmt17 --resume logs/spcl_uda/market2msmt_resnet50/model_best.pth.tar

# test on the source domain
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --test-source \
  -d market1501 --resume logs/spcl_uda/market2msmt_resnet50/model_best.pth.tar
```

Unsupervised Learning

To evaluate the model, run:

```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py \
  -d $DATASET --resume $PATH
```

Some examples:

```shell
# Market-1501
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py \
  -d market1501 --resume logs/spcl_usl/market_resnet50/model_best.pth.tar
```

Trained Models

(Figure: trained models and their results)

You can download the above models in the paper from [Google Drive] or [Baidu Yun](password: w3l9).

Citation

If you find this code useful for your research, please cite our paper

@inproceedings{ge2020selfpaced,
    title={Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID},
    author={Yixiao Ge and Feng Zhu and Dapeng Chen and Rui Zhao and Hongsheng Li},
    booktitle={Advances in Neural Information Processing Systems},
    year={2020}
}
