[NeurIPS-2020] Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID.
The official repository for Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID, which is accepted by NeurIPS-2020.
SpCL achieves state-of-the-art performance on both unsupervised domain adaptation tasks and unsupervised learning tasks for object re-ID, including person re-ID and vehicle re-ID.
[2020-10-13] All trained models for the camera-ready version have been updated, see Trained Models for details.
[2020-09-25] SpCL has been accepted by NeurIPS on the condition that experiments on the DukeMTMC-reID dataset be removed, since the dataset has been taken down and should no longer be used.
[2020-07-01] We did the code refactoring to support distributed training, stronger performances and more features. Please see OpenUnReID.
```shell
git clone https://github.com/yxgeee/SpCL.git
cd SpCL
python setup.py develop
```
```shell
cd examples && mkdir data
```
Download the person datasets Market-1501, MSMT17 and PersonX, and the vehicle datasets VehicleID, VeRi-776 and VehicleX, then unzip them so the directory layout looks like
```
SpCL/examples/data
├── market1501
│   └── Market-1501-v15.09.15
├── msmt17
│   └── MSMT17_V1
├── personx
│   └── PersonX
├── vehicleid
│   └── VehicleID -> VehicleID_V1.0
├── vehiclex
│   └── AIC20_ReID_Simulation -> AIC20_track2/AIC20_ReID_Simulation
└── veri
    └── VeRi -> VeRi_with_plate
```
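The `VehicleID`, `AIC20_ReID_Simulation` and `VeRi` entries in the tree are symlinks (`->`). Assuming the archives unzip to the directory names on the right-hand side, they can be created with a few `ln` calls, for example:

```shell
# Create the dataset symlinks shown in the tree above.
# Run from the SpCL root; adjust the right-hand-side names if your
# unzipped directories are named differently.
mkdir -p examples/data/vehicleid examples/data/vehiclex examples/data/veri
ln -sfn VehicleID_V1.0 examples/data/vehicleid/VehicleID
ln -sfn AIC20_track2/AIC20_ReID_Simulation examples/data/vehiclex/AIC20_ReID_Simulation
ln -sfn VeRi_with_plate examples/data/veri/VeRi
```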
When training with the IBN-ResNet backbone, you need to download the ImageNet-pretrained model from this link and save it under `logs/pretrained/`.
```shell
mkdir logs && cd logs
mkdir pretrained
```
The file tree should be
```
SpCL/logs
└── pretrained
    └── resnet50_ibn_a.pth.tar
```
ImageNet-pretrained models for ResNet-50 will be downloaded automatically by the Python script.
We utilize 4 GTX-1080TI GPUs for training. Note that SpCL is end-to-end, which means that no source-domain pre-training is required. Use
- `--iters 400` (default) for the Market-1501 and PersonX datasets, and `--iters 800` for the MSMT17, VeRi-776, VehicleID and VehicleX datasets;
- `--width 128 --height 256` (default) for person datasets, and `--height 224 --width 224` for vehicle datasets;
- `-a resnet50` (default) for the backbone of ResNet-50, and `-a resnet_ibn50a` for the backbone of IBN-ResNet.
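The per-dataset choices above can be captured in a small shell helper. `opts_for` below is hypothetical, not part of the repository; it simply encodes the iteration-count and resolution rules for each dataset:

```shell
# Hypothetical helper: map a dataset name to the recommended
# training options (iteration count and input resolution).
opts_for() {
  case "$1" in
    market1501|personx)      echo "--iters 400 --width 128 --height 256" ;;
    msmt17)                  echo "--iters 800 --width 128 --height 256" ;;
    veri|vehicleid|vehiclex) echo "--iters 800 --height 224 --width 224" ;;
    *)                       echo "" ;;
  esac
}

opts_for msmt17   # prints: --iters 800 --width 128 --height 256
```

It could then be used as, e.g., `python examples/spcl_train_uda.py $(opts_for msmt17) ...`.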
To train the domain adaptive (UDA) models in the paper, run this command:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py \
  -ds $SOURCE_DATASET -dt $TARGET_DATASET --logs-dir $PATH_OF_LOGS
```
Some examples:
```shell
# PersonX -> Market-1501
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py \
  -ds personx -dt market1501 --logs-dir logs/spcl_uda/personx2market_resnet50

# Market-1501 -> MSMT17
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 \
  -ds market1501 -dt msmt17 --logs-dir logs/spcl_uda/market2msmt_resnet50

# VehicleID -> VeRi-776
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_uda.py --iters 800 --height 224 --width 224 \
  -ds vehicleid -dt veri --logs-dir logs/spcl_uda/vehicleid2veri_resnet50
```
To train the unsupervised (USL) models in the paper, run this command:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py \
  -d $DATASET --logs-dir $PATH_OF_LOGS
```
Some examples:
```shell
# Market-1501
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py \
  -d market1501 --logs-dir logs/spcl_usl/market_resnet50

# MSMT17
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py --iters 800 \
  -d msmt17 --logs-dir logs/spcl_usl/msmt_resnet50

# VeRi-776
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python examples/spcl_train_usl.py --iters 800 --height 224 --width 224 \
  -d veri --logs-dir logs/spcl_usl/veri_resnet50
```
We utilize 1 GTX-1080TI GPU for testing. Note that
- `--width 128 --height 256` (default) for person datasets, and `--height 224 --width 224` for vehicle datasets;
- `--dsbn` for domain adaptive models, and add `--test-source` if you want to test on the source domain;
- `-a resnet50` (default) for the backbone of ResNet-50, and `-a resnet_ibn50a` for the backbone of IBN-ResNet.
To evaluate the domain adaptive model on the target-domain dataset, run:
```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn \
  -d $DATASET --resume $PATH_OF_MODEL
```
To evaluate the domain adaptive model on the source-domain dataset, run:
```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --test-source \
  -d $DATASET --resume $PATH_OF_MODEL
```
Some examples:
```shell
# Test on the target domain (MSMT17)
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn \
  -d msmt17 --resume logs/spcl_uda/market2msmt_resnet50/model_best.pth.tar

# Test on the source domain (Market-1501)
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py --dsbn --test-source \
  -d market1501 --resume logs/spcl_uda/market2msmt_resnet50/model_best.pth.tar
```
To evaluate the unsupervised model, run:
```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py \
  -d $DATASET --resume $PATH_OF_MODEL
```
Some examples:
```shell
CUDA_VISIBLE_DEVICES=0 \
python examples/test.py \
  -d market1501 --resume logs/spcl_usl/market_resnet50/model_best.pth.tar
```
You can download the above models in the paper from [Google Drive] or [Baidu Yun](password: w3l9).
If you find this code useful for your research, please cite our paper:
```
@inproceedings{ge2020selfpaced,
  title={Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID},
  author={Yixiao Ge and Feng Zhu and Dapeng Chen and Rui Zhao and Hongsheng Li},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}
```