Generate image analogies using neural matching and blending.
This is basically an implementation of the "Image Analogies" paper; in our case, we use feature maps from VGG16. The patch matching and blending are inspired by the method described in "Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis". Effects similar to that paper can be achieved by turning off the analogy loss (or leaving it on!) with `--analogy-w=0` and turning on the B/B' content weighting via the `--b-content-w` parameter. Also, instead of using brute-force patch matching we use the PatchMatch algorithm to approximate the best patch matches. Brute-force matching can be re-enabled by setting `--model=brute`.
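To make the PatchMatch idea concrete, here is a minimal single-channel, 1x1-patch sketch (not the repository's implementation; the function name, parameters, and the use of plain NumPy arrays are illustrative): random initialization of a nearest-neighbor field, then alternating propagation and shrinking-radius random search.

```python
import numpy as np

def patchmatch(a, b, iters=4, seed=0):
    """Approximate a nearest-neighbor field from pixels of `a` to pixels of `b`."""
    rng = np.random.default_rng(seed)
    h, w = a.shape
    bh, bw = b.shape
    # nnf[y, x] = (by, bx): current best match in b for pixel (y, x) of a.
    nnf = np.stack([rng.integers(0, bh, (h, w)),
                    rng.integers(0, bw, (h, w))], axis=-1)

    def cost(y, x, by, bx):
        d = float(a[y, x]) - float(b[by, bx])
        return d * d

    for it in range(iters):
        # Reverse the scan order each iteration so matches propagate both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                by, bx = nnf[y, x]
                best = cost(y, x, by, bx)
                # Propagation: adopt a neighbor's match, shifted by one pixel.
                for dy, dx in ((step, 0), (0, step)):
                    ny, nx = y - dy, x - dx
                    if 0 <= ny < h and 0 <= nx < w:
                        cy = min(max(nnf[ny, nx, 0] + dy, 0), bh - 1)
                        cx = min(max(nnf[ny, nx, 1] + dx, 0), bw - 1)
                        c = cost(y, x, cy, cx)
                        if c < best:
                            by, bx, best = cy, cx, c
                # Random search: sample around the best match at halving radii.
                radius = max(bh, bw)
                while radius >= 1:
                    cy = int(np.clip(by + rng.integers(-radius, radius + 1), 0, bh - 1))
                    cx = int(np.clip(bx + rng.integers(-radius, radius + 1), 0, bw - 1))
                    c = cost(y, x, cy, cx)
                    if c < best:
                        by, bx, best = cy, cx, c
                    radius //= 2
                nnf[y, x] = by, bx
    return nnf
```

The real algorithm matches multi-channel feature-map patches rather than single pixels, but the propagate/random-search structure is the same reason it beats brute force: each pixel only examines a handful of candidates per iteration instead of every patch in the target.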
The initial code was adapted from the Keras "neural style transfer" example.
The example arch images are from the "Image Analogies" website. They have some other good examples from their own implementation which are worth a look. Their paper discusses the various applications of image analogies so you might want to take a look for inspiration.
This requires either TensorFlow or Theano. If you don't have a GPU, you'll want to use TensorFlow. GPU users may find Theano to be faster, at the expense of longer startup times. Here's the Theano GPU guide.
Here's how to configure the backend with Keras and set your default device (e.g. cpu, gpu0).
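For example, since Keras chooses its backend at import time, you can set the relevant environment variables from Python before importing it (the specific device string is illustrative):

```python
import os

# Keras reads KERAS_BACKEND when it is first imported.
os.environ["KERAS_BACKEND"] = "theano"  # or "tensorflow"

# Theano's device is selected via THEANO_FLAGS, e.g. cpu or gpu0.
os.environ["THEANO_FLAGS"] = "device=gpu0,floatX=float32"

# import keras  # would now initialize with the Theano backend on gpu0
```

Setting the same variables in your shell (or in `~/.keras/keras.json` / `~/.theanorc`) works too; the point is that they must be set before Keras or Theano is first loaded.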
To install via virtualenv run the following commands.
```
virtualenv venv
source venv/bin/activate
pip install neural-image-analogies
```
If you have trouble with the above method, follow these directions to install the latest Keras and Theano or TensorFlow.
`make_image_analogy.py` should now be on your path.
Before running this script, download the weights for the VGG16 model. This file contains only the convolutional layers of VGG16, which is 10% of the size of the full model. Original source of full weights. The script assumes the weights are in the current working directory. If you place them somewhere else, make sure to pass the `--vgg-weights=` parameter or set the corresponding environment variable.
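The lookup order just described can be pictured as a small helper (hypothetical: the function name, the `VGG_WEIGHTS` variable name, and the default filename are assumptions for illustration):

```python
import os

def resolve_vgg_weights(cli_value=None, env_var="VGG_WEIGHTS",
                        default="vgg16_weights.h5"):
    """Pick the weights file: CLI flag first, then env var, then cwd.

    Hypothetical sketch of the precedence described above; not the
    script's actual code.
    """
    if cli_value:
        return cli_value
    if os.environ.get(env_var):
        return os.environ[env_var]
    return os.path.join(os.getcwd(), default)
```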
Example script usage:
```
make_image_analogy.py image-A image-A-prime image-B prefix_for_output
make_image_analogy.py images/arch-mask.jpg images/arch.jpg images/arch-newmask.jpg out/arch
```
The examples directory has a script, `render_example.sh`, which accepts an example name prefix and, optionally, the location of your VGG weights.

```
./render_example.sh arch /path/to/your/weights.h5
```
Currently, A and A' must be the same size; the same holds for B and B'. The output is the same size as image B, unless specified otherwise.
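A quick way to validate inputs before a long render is a shape check like the following (a hypothetical helper, not part of the script; you supply A, A', and B, and the output B' takes B's size by default):

```python
import numpy as np

def check_analogy_shapes(a, a_prime, b):
    """Validate the size constraint: A and A' must match in width/height.

    Returns the default output (B') size, which is B's size.
    """
    if a.shape[:2] != a_prime.shape[:2]:
        raise ValueError("A and A' must have the same width and height")
    return b.shape[:2]
```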
If you're not using a GPU, use TensorFlow. My MacBook Pro can render a 512x512 image in approximately 12 minutes using TensorFlow and `--mrf-w=0`. Here are some other options, which mostly trade quality for speed:
* `OMP_NUM_THREADS=` to use multiple cores with Theano. You can read more about multi-core support here.
* `--mrf-w=0` to skip optimization of local coherence
* `--analogy-layers=conv4_1` (or other layers), which will consider half as many feature layers
* `--model=brute`, which needs a powerful GPU
The default settings are somewhat lowered to give the average user a better chance at generating something on whatever computer they may have. If you have a powerful GPU, here are some options for nicer output:

* `--model=brute` will turn on brute-force patch matching, done on the GPU. This is Theano-only (default=patchmatch)
* `--patch-size=3` will allow for much nicer-looking details (default=1)
* `--mrf-layers=conv1_1,conv2_1,...` to add more layers to the mix (also `--analogy-layers` and `--content-layers`)
The MRF loss (or "local coherence") is the influence of B' -> A' -> B'. In the parlance of style transfer, this is the style loss which gives texture to the image.
The B/B' content loss is set to 0.0 by default. You can get effects similar to CNNMRF by turning this up and setting analogy weight to zero. Or leave the analogy loss on for some extra style guidance.
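How the weights interact can be pictured as a simple weighted sum (a sketch only; the actual loss terms are computed on VGG feature maps, and the parameter names here just mirror the CLI flags):

```python
def total_loss(analogy, mrf, content,
               analogy_w=1.0, mrf_w=1.0, content_w=0.0):
    """Weighted combination of the three losses.

    analogy_w ~ --analogy-w, mrf_w ~ --mrf-w, content_w ~ --b-content-w.
    Setting analogy_w=0 with content_w > 0 gives the CNNMRF-style behavior
    described above; leaving analogy_w > 0 adds the analogy guidance back in.
    """
    return analogy_w * analogy + mrf_w * mrf + content_w * content
```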
If you'd like to only visualize the analogy target to see what's happening, set the MRF and content losses to zero: `--mrf-w=0 --content-w=0`. This is also much faster, as the MRF loss is the slowest part of the algorithm.
The code for this implementation is provided under the MIT license.
The suggested VGG16 weights are originally from here and are licensed under Creative Commons Attribution-NonCommercial 4.0 (http://creativecommons.org/licenses/by-nc/4.0/). Open a ticket if you have a suggestion for a more free-as-in-free-speech license.
The attributions for the example art can be found in