
About the developer

simontomaskarlsson
181 stars · 62 forks · GNU General Public License v3.0 · 23 commits · 8 open issues

Description

Code repository for Frontiers article 'Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT'



Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT

[Arxiv paper]

Code usage

  1. Prepare your dataset under the 'data' directory in the CycleGAN or UNIT folder, and set the dataset name as the 'image_folder' parameter in the model's init function.

    • Directory structure needed for training and testing on a new dataset:
    • data/Dataset-name/trainA
    • data/Dataset-name/trainB
    • data/Dataset-name/testA
    • data/Dataset-name/testB
  2. Train a model by running:

    python CycleGAN.py
    
    or
    python UNIT.py
    
  3. Generate synthetic images by following the instructions in:

    • CycleGAN/generate_images/ReadMe.md
    • UNIT/generate_images/ReadMe.md
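
The directory layout from step 1 can also be created programmatically. A minimal sketch, using a placeholder dataset name 'MyDataset' (an assumption here; substitute your own name and pass it as the 'image_folder' parameter):

```python
import os

# Placeholder dataset name; replace with your own and pass the same
# string as the 'image_folder' parameter in the model init function.
dataset = "MyDataset"

# trainA/trainB hold the two source/target domains (e.g. T1 and T2),
# testA/testB the corresponding held-out images.
for split in ("trainA", "trainB", "testA", "testB"):
    os.makedirs(os.path.join("data", dataset, split), exist_ok=True)
```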

Result GIFs - 304x256 pixel images

Left: Input image. Middle: Synthetic images generated during training. Right: Ground truth.
Histograms show pixel value distributions for synthetic images (blue) compared to ground truth (brown).
(An updated image normalization, present in the current version of this repo, has fixed the intensity error seen in these results.)
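
As a sketch of how such a histogram comparison can be done (not the repo's own code), one can normalize both images to [-1, 1], a common input range for GAN generators with tanh outputs, and compare their pixel-value distributions with NumPy:

```python
import numpy as np

def normalize(img):
    """Scale pixel values to [-1, 1]; a common convention for GAN
    generators with tanh output layers (an assumption here, not
    necessarily this repo's exact normalization scheme)."""
    img = img.astype(np.float32)
    return 2.0 * (img - img.min()) / (img.max() - img.min()) - 1.0

def histogram(img, bins=50):
    """Pixel-value distribution over the normalized range."""
    counts, _ = np.histogram(img, bins=bins, range=(-1.0, 1.0))
    return counts / counts.sum()  # probability distribution

# Toy stand-ins for a synthetic image and its ground truth
# (304x256, the image size used in the result GIFs above).
rng = np.random.default_rng(0)
synthetic = normalize(rng.integers(0, 256, size=(304, 256)))
ground_truth = normalize(rng.integers(0, 256, size=(304, 256)))

# L1 distance between the two pixel-value distributions; a large value
# would indicate the kind of intensity mismatch the histograms reveal.
l1_distance = np.abs(histogram(synthetic) - histogram(ground_truth)).sum()
print(f"L1 histogram distance: {l1_distance:.4f}")
```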

CycleGAN - T1 to T2

CycleGAN - T2 to T1

UNIT - T1 to T2

UNIT - T2 to T1
