
Contrastive Adaptation Network

Update (2020-10-17): We have extended our method to the multi-source domain adaptation scenario. Please refer to our TPAMI paper Contrastive Adaptation Network for Single- and Multi-Source Domain Adaptation for more details. We will release our code for multi-source domain adaptation soon.

2019-11: This is the PyTorch implementation of our CVPR 2019 paper Contrastive Adaptation Network for Unsupervised Domain Adaptation. Because we reorganized the code for a newer PyTorch version, some hyper-parameters differ slightly from those reported in the paper.
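
For context before diving into the code: CAN trains with a contrastive domain discrepancy that pulls together source and target features of the same class and pushes apart features of different classes, using clustering-based pseudo-labels for the unlabeled target domain. The sketch below illustrates that idea with a multi-kernel Gaussian MMD estimator; the function names, kernel bandwidths, and equal class weighting are illustrative assumptions, not this repository's implementation.

```
import torch

def gaussian_mmd(x, y, sigmas=(1.0, 2.0, 5.0)):
    """Squared MMD between two feature batches, multi-kernel Gaussian (assumed bandwidths)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)  # pairwise squared Euclidean distances
        return sum(torch.exp(-d2 / (2.0 * s ** 2)) for s in sigmas)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

def contrastive_discrepancy(feat_s, y_s, feat_t, y_t, num_classes):
    """Intra-class discrepancy (to minimize) minus inter-class discrepancy
    (to maximize); y_t holds clustering-based pseudo-labels for the target."""
    intra, inter, n_intra, n_inter = 0.0, 0.0, 0, 0
    for c1 in range(num_classes):
        s = feat_s[y_s == c1]
        if len(s) == 0:
            continue
        for c2 in range(num_classes):
            t = feat_t[y_t == c2]
            if len(t) == 0:
                continue
            if c1 == c2:
                intra, n_intra = intra + gaussian_mmd(s, t), n_intra + 1
            else:
                inter, n_inter = inter + gaussian_mmd(s, t), n_inter + 1
    return intra / max(n_intra, 1) - inter / max(n_inter, 1)
```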


Requirements

  • Python 3.7
  • PyTorch 1.1
  • PyYAML 5.1.1
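
A minimal environment matching these versions might be set up as follows (the conda environment name is an assumption; torchvision 0.3.0 is the release paired with PyTorch 1.1):

```
conda create -n can python=3.7
conda activate can
pip install torch==1.1.0 torchvision==0.3.0 PyYAML==5.1.1
```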


Dataset

The dataset directory should be structured as follows:

|_ category.txt
|_ amazon
|  |_ back_pack
|     |_ <image-1>.jpg
|     |_ ...
|     |_ <image-N>.jpg
|  |_ bike
|     |_ <image-1>.jpg
|     |_ ...
|     |_ <image-N>.jpg
|  |_ ...
|_ dslr
|  |_ back_pack
|     |_ <image-1>.jpg
|     |_ ...
|     |_ <image-N>.jpg
|  |_ bike
|     |_ <image-1>.jpg
|     |_ ...
|     |_ <image-N>.jpg
|  |_ ...
|_ ...

The "category.txt" contains the names of all the categories, which is like



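
To catch layout mistakes early, a small sanity check like the following can verify that every domain folder contains a directory for each category listed in category.txt (this script and the ./data/Office-31 root path are illustrative, not part of this repository):

```
import os

def check_dataset(root, domains=("amazon", "dslr", "webcam")):
    # category.txt lists one class name per line.
    with open(os.path.join(root, "category.txt")) as f:
        categories = [line.strip() for line in f if line.strip()]
    for domain in domains:
        for cat in categories:
            cat_dir = os.path.join(root, domain, cat)
            if not os.path.isdir(cat_dir):
                print(f"missing folder: {cat_dir}")
            elif not any(n.endswith(".jpg") for n in os.listdir(cat_dir)):
                print(f"no .jpg files in: {cat_dir}")

check_dataset("./data/Office-31")  # hypothetical dataset root
```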

Training

To train the model, run the training script under ./experiments/scripts/ with the following arguments:

./experiments/scripts/ ${config_yaml} ${gpu_ids} ${adaptation_method} ${experiment_name}

For example, for the Office-31 dataset:

./experiments/scripts/ ./experiments/config/Office-31/CAN/office31_train_amazon2dslr_cfg.yaml 0 CAN office31_a2d

and for the VisDA-2017 dataset:

./experiments/scripts/ ./experiments/config/VisDA-2017/CAN/visda17_train_train2val_cfg.yaml 0 CAN visda17_train2val

The experiment log file and the saved checkpoints will be stored at ./experiments/ckpt/${experiment_name}.
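
To inspect what was saved, a generic PyTorch load works; the checkpoint file name and the keys it holds below are assumptions, so adjust them to whatever actually appears in that folder:

```
import torch

# Hypothetical checkpoint path under ./experiments/ckpt/${experiment_name}.
ckpt = torch.load("./experiments/ckpt/office31_a2d/ckpt.pth", map_location="cpu")

# Checkpoints are commonly dicts holding weights and training state.
if isinstance(ckpt, dict):
    for key in ckpt:
        print(key)
```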


Test

To evaluate a trained model, run the test script under ./experiments/scripts/ with:

./experiments/scripts/ ${config_yaml} 0 ${if_adapted} ${experiment_name}


For example:

./experiments/scripts/ ./experiments/config/Office-31/office31_test_amazon_cfg.yaml 0 True visda17_test


Citation

Please cite our papers if you use our code in your research:

```
@article{kangcontrastive,
  title={Contrastive Adaptation Network for Single- and Multi-Source Domain Adaptation},
  author={Kang, Guoliang and Jiang, Lu and Wei, Yunchao and Yang, Yi and Hauptmann, Alexander G},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020}
}

@inproceedings{kang2019contrastive,
  title={Contrastive Adaptation Network for Unsupervised Domain Adaptation},
  author={Kang, Guoliang and Jiang, Lu and Yang, Yi and Hauptmann, Alexander G},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={4893--4902},
  year={2019}
}
```


Contact

If you have any questions, please contact me via [email protected]

Thanks to third party

The way of setting configurations is inspired by
