

XingGAN or CrossingGAN

| Project | Paper |
XingGAN for Person Image Generation
Hao Tang (1,2), Song Bai (2), Li Zhang (2), Philip H.S. Torr (2), Nicu Sebe (1,3).
(1) University of Trento, Italy; (2) University of Oxford, UK; (3) Huawei Research Ireland, Ireland.
In ECCV 2020.
This repository provides the official PyTorch implementation of our paper.

In the meantime, check out our related ACM MM 2019 paper Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation, BMVC 2020 oral paper Bipartite Graph Reasoning GANs for Person Image Generation, and ICCV 2021 paper Intrinsic-Extrinsic Preserved GANs for Unsupervised 3D Pose Transfer.

Framework

Comparison Results


License

Creative Commons License
Copyright (C) 2020 University of Trento, Italy.

All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)

The code is released for academic research use only. For commercial use, please contact [email protected].

Installation

Clone this repo.

```bash
git clone https://github.com/Ha0Tang/XingGAN
cd XingGAN/
```

This code requires PyTorch 1.0.0 and Python 3.6.9+. Please install the following dependencies:

- pytorch 1.0.0
- torchvision
- numpy
- scipy
- scikit-image
- pillow
- pandas
- tqdm
- dominate

To reproduce the results reported in the paper, run the DeepFashion experiments on an NVIDIA DGX-1 with four 32 GB V100 GPUs, and the Market-1501 experiments on a single 32 GB V100 GPU.

Dataset Preparation

Please follow SelectionGAN to directly download both Market-1501 and DeepFashion datasets.

This repository uses the same dataset format as SelectionGAN and BiGraphGAN, so you can use the same data for all three methods.
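Pipelines in this family (Pose-Transfer, SelectionGAN, and this repo) condition generation on body keypoints encoded as per-joint Gaussian heat-map channels. The sketch below illustrates that encoding; the function name, the default `sigma`, and the missing-joint convention are illustrative assumptions, not the repository's exact preprocessing code.

```python
import numpy as np

def keypoints_to_heatmaps(keypoints, height, width, sigma=6.0):
    """Encode (x, y) keypoints as one Gaussian heat-map channel each.

    `keypoints` has shape (K, 2); joints with negative coordinates are
    treated as missing and produce an all-zero channel.
    """
    maps = np.zeros((len(keypoints), height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for i, (x, y) in enumerate(keypoints):
        if x < 0 or y < 0:  # missing joint
            continue
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return maps
```

The resulting (K, H, W) tensor is what gets concatenated with the source image as the generator's conditioning input.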

Generating Images Using Pretrained Model

Market-1501

```bash
sh scripts/download_xinggan_model.sh market
```

Then:

1. Change several parameters in `test_market.sh`.
2. Run `sh test_market.sh` for testing.

DeepFashion

```bash
sh scripts/download_xinggan_model.sh deepfashion
```

Then:

1. Change several parameters in `test_deepfashion.sh`.
2. Run `sh test_deepfashion.sh` for testing.

Train and Test New Models

Market-1501

1. Change several parameters in `train_market.sh`.
2. Run `sh train_market.sh` for training.
3. Change several parameters in `test_market.sh`.
4. Run `sh test_market.sh` for testing.

DeepFashion

1. Change several parameters in `train_deepfashion.sh`.
2. Run `sh train_deepfashion.sh` for training.
3. Change several parameters in `test_deepfashion.sh`.
4. Run `sh test_deepfashion.sh` for testing.

Evaluation

We adopt SSIM, mask-SSIM, IS, mask-IS, and PCKh to evaluate Market-1501, and SSIM, IS, and PCKh to evaluate DeepFashion.

1. SSIM, mask-SSIM, IS, mask-IS: install `python3.5`, `tensorflow 1.4.1`, and `scikit-image==0.14.2`. Then run `python tool/getMetrics_market.py` or `python tool/getMetrics_fashion.py`.
2. PCKh: install `python2`, run `pip install tensorflow==1.4.0`, and set `export KERAS_BACKEND=tensorflow`. After that, run `python tool/crop_market.py` or `python tool/crop_fashion.py`. Next, download the pose estimator, put it under the root folder, and run `python compute_coordinates.py`. Lastly, run `python tool/calPCKH_market.py` or `python tool/calPCKH_fashion.py`.
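To give a feel for what SSIM and mask-SSIM measure: SSIM compares two images via their means, variances, and covariance; mask-SSIM restricts the comparison to the person region. The sketch below is a single-window (global) SSIM in plain NumPy, a simplification for illustration only; the evaluation scripts above use the windowed implementation from the pinned `scikit-image==0.14.2`, not this code.

```python
import numpy as np

def global_ssim(x, y, mask=None, data_range=255.0):
    """Single-window SSIM over the whole image (a simplification of the
    usual sliding-window SSIM). With a boolean `mask`, only pixels inside
    the mask are compared, giving a crude mask-SSIM."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    if mask is not None:
        x, y = x[mask], y[mask]
    c1 = (0.01 * data_range) ** 2  # standard SSIM stability constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; the score drops as luminance, contrast, or structure diverge.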

Please refer to Pose-Transfer for more details.
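For readers unfamiliar with PCKh: it is the fraction of predicted keypoints that land within a threshold of their ground-truth locations, where the threshold is a fraction (typically 0.5) of the head segment length. A minimal sketch of that definition, with an assumed negative-coordinate convention for invisible joints (not the exact code in `tool/calPCKH_*.py`):

```python
import numpy as np

def pckh(pred, gt, head_size, alpha=0.5):
    """PCKh: fraction of predicted joints whose distance to the ground
    truth is at most alpha * head_size. Joints with negative ground-truth
    coordinates are treated as invisible and excluded."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    valid = (gt >= 0).all(axis=1)          # visible joints only
    dists = np.linalg.norm(pred - gt, axis=1)
    correct = (dists <= alpha * head_size) & valid
    return correct.sum() / max(valid.sum(), 1)
```

For generated person images, `gt` comes from running the pose estimator on the real target image and `pred` from running it on the generated one, so PCKh measures how well the generated image preserves the target pose.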

Acknowledgments

This source code is inspired by both Pose-Transfer and SelectionGAN.

Related Projects

BiGraphGAN | GestureGAN | C2GAN | SelectionGAN | Guided-I2I-Translation-Papers

Citation

If you use this code for your research, please consider giving a star :star: and citing our paper :t-rex::

XingGAN

@inproceedings{tang2020xinggan,
  title={XingGAN for Person Image Generation},
  author={Tang, Hao and Bai, Song and Zhang, Li and Torr, Philip HS and Sebe, Nicu},
  booktitle={ECCV},
  year={2020}
}

If you use the original BiGraphGAN, GestureGAN, C2GAN, or SelectionGAN models, please consider giving them stars :star: and citing the following papers :t-rex::

BiGraphGAN

@inproceedings{tang2020bipartite,
  title={Bipartite Graph Reasoning GANs for Person Image Generation},
  author={Tang, Hao and Bai, Song and Torr, Philip HS and Sebe, Nicu},
  booktitle={BMVC},
  year={2020}
}

GestureGAN

@article{tang2019unified,
  title={Unified Generative Adversarial Networks for Controllable Image-to-Image Translation},
  author={Tang, Hao and Liu, Hong and Sebe, Nicu},
  journal={IEEE Transactions on Image Processing (TIP)},
  year={2020}
}

@inproceedings{tang2018gesturegan,
  title={GestureGAN for Hand Gesture-to-Gesture Translation in the Wild},
  author={Tang, Hao and Wang, Wei and Xu, Dan and Yan, Yan and Sebe, Nicu},
  booktitle={ACM MM},
  year={2018}
}

C2GAN

@article{tang2021total,
  title={Total Generate: Cycle in Cycle Generative Adversarial Networks for Generating Human Faces, Hands, Bodies, and Natural Scenes},
  author={Tang, Hao and Sebe, Nicu},
  journal={IEEE Transactions on Multimedia (TMM)},
  year={2021}
}

@inproceedings{tang2019cycleincycle,
  title={Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation},
  author={Tang, Hao and Xu, Dan and Liu, Gaowen and Wang, Wei and Sebe, Nicu and Yan, Yan},
  booktitle={ACM MM},
  year={2019}
}

SelectionGAN

@inproceedings{tang2019multi,
  title={Multi-channel attention selection gan with cascaded semantic guidance for cross-view image translation},
  author={Tang, Hao and Xu, Dan and Sebe, Nicu and Wang, Yanzhi and Corso, Jason J and Yan, Yan},
  booktitle={CVPR},
  year={2019}
}

@article{tang2020multi,
  title={Multi-channel attention selection gans for guided image-to-image translation},
  author={Tang, Hao and Xu, Dan and Yan, Yan and Corso, Jason J and Torr, Philip HS and Sebe, Nicu},
  journal={arXiv preprint arXiv:2002.01048},
  year={2020}
}

Contributions

If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Hao Tang ([email protected]).

Collaborations

I'm always interested in meeting new people and hearing about potential collaborations. If you'd like to work together or get in touch, please email [email protected]. Some of our projects are listed here.


Progress is impossible without change, and those who cannot change their minds cannot change anything.
