
zsdonghao / text-to-image

Generative Adversarial Text to Image Synthesis

474 Stars 148 Forks Last release: over 3 years ago (0.2) 77 Commits 2 Releases


Text To Image Synthesis

This is an experimental TensorFlow implementation of text-to-image synthesis. Images are synthesized with the GAN-CLS algorithm from the paper Generative Adversarial Text-to-Image Synthesis. The implementation is built on top of the excellent DCGAN in Tensorflow.
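The core idea of GAN-CLS is that the discriminator scores three kinds of pairs: a real image with its matching caption (should score "real"), a real image with a mismatched caption, and a generated image with a matching caption (both should score "fake"). A minimal NumPy sketch of that loss, with hypothetical names and without the actual network code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_cls_d_loss(s_real_match, s_real_mismatch, s_fake):
    """GAN-CLS discriminator loss on raw discriminator scores (logits):
    real image + matching text counts as 'real'; real image + mismatched
    text and fake image + matching text both count as 'fake'."""
    eps = 1e-8
    loss_real = -np.log(sigmoid(s_real_match) + eps)
    loss_mismatch = -np.log(1.0 - sigmoid(s_real_mismatch) + eps)
    loss_fake = -np.log(1.0 - sigmoid(s_fake) + eps)
    return float(np.mean(loss_real + 0.5 * (loss_mismatch + loss_fake)))

def gan_cls_g_loss(s_fake):
    """Generator loss: make the fake image + matching text pair look real."""
    eps = 1e-8
    return float(np.mean(-np.log(sigmoid(s_fake) + eps)))
```

This is only a sketch of the objective; the repository implements it with TensorFlow ops and learned image/text encoders.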

Model architecture

Image Source : Generative Adversarial Text-to-Image Synthesis Paper
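As in DCGAN, the generator upsamples a small feature map with stacked transposed convolutions that double the spatial resolution at each layer. A hedged arithmetic sketch of that size progression (kernel, stride, and the 4-to-64 schedule are illustrative, not read from model.py):

```python
def deconv_out_size(size_in, kernel=5, stride=2, pad=2, out_pad=1):
    """Spatial output size of one transposed convolution:
    out = (in - 1) * stride - 2 * pad + kernel + out_pad."""
    return (size_in - 1) * stride - 2 * pad + kernel + out_pad

# A DCGAN-style generator doubles resolution each layer: 4 -> 8 -> 16 -> 32 -> 64.
sizes = [4]
for _ in range(4):
    sizes.append(deconv_out_size(sizes[-1]))
```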



  • The model is currently trained on the Oxford-102 flowers dataset. Download the images from here and the captions from this link, extract the caption archive, and place both in the expected data folders.
N.B. You can download all the required data files manually, or simply run downloads.py to fetch them and place them in the right directories:

python downloads.py
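Conceptually, the script maps each downloaded archive to the folder it should be extracted into. A hypothetical sketch of that routing step only (the archive names, folder names, and `target_path` helper are assumptions; the real downloads.py may differ):

```python
import os

# Hypothetical mapping of downloaded archives to target folders.
DATA_DIR = "data"
TARGETS = {
    "102flowers.tgz": "flowers",    # Oxford-102 images (assumed name)
    "text_c10.tar.gz": "captions",  # caption archive (assumed name)
}

def target_path(archive_name):
    """Return the directory an extracted archive should land in,
    or None for unrecognised files."""
    folder = TARGETS.get(archive_name)
    return os.path.join(DATA_DIR, folder) if folder else None
```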


  • downloads.py: download the Oxford-102 flower dataset and caption files (run this first).
  • data_loader.py: load data for further processing.
  • train_txt2im.py: train a text-to-image model.
  • utils.py: helper functions.
  • model.py: model definitions.
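Before training, the captions have to be turned into fixed-length sequences of integer word ids. A minimal, hypothetical tokenisation sketch in the spirit of data_loader.py (the real loader's vocabulary handling and padding scheme may differ):

```python
def build_vocab(captions):
    """Map each word to an integer id; 0 is reserved for padding."""
    vocab = {"<pad>": 0}
    for caption in captions:
        for word in caption.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def encode(caption, vocab, max_len=12):
    """Encode a caption as a fixed-length list of word ids,
    truncating or zero-padding to max_len."""
    ids = [vocab.get(w, 0) for w in caption.lower().split()][:max_len]
    return ids + [0] * (max_len - len(ids))
```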


Results

Example input sentences used to synthesize flower images:

  • the flower shown has yellow anther red pistil and bright red petals.
  • this flower has petals that are yellow, white and purple and has dark lines
  • the petals on this flower are white with a yellow center
  • this flower has a lot of small round pink petals.
  • this flower is orange in color, and has petals that are ruffled and rounded.
  • the flower has yellow petals and the center of it is brown
  • this flower has petals that are blue and white.
  • these white flowers have petals that start off white in color and end in a white towards the tips.
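At sampling time, each sentence is encoded into a text embedding, reduced to a small dimension, and concatenated with a Gaussian noise vector z to form the generator input. A NumPy shape sketch with dimensions taken from the GAN-CLS paper (z_dim=100, reduced text dimension 128); the fixed random projection is a stand-in for the learned projection layer, and this repo's actual dimensions may differ:

```python
import numpy as np

def generator_input(text_embedding, z_dim=100, t_dim=128, rng=None):
    """Concatenate Gaussian noise with a (reduced) text embedding.
    Dimensions follow the GAN-CLS paper; illustrative only."""
    rng = np.random.default_rng(rng)
    # Fixed random projection standing in for the learned reduction layer.
    proj = rng.standard_normal((text_embedding.shape[-1], t_dim))
    reduced = np.tanh(text_embedding @ proj)
    z = rng.standard_normal(z_dim)
    return np.concatenate([z, reduced])
```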


License: Apache 2.0
