deep-photo-styletransfer

Code and data for paper "Deep Photo Style Transfer"

Disclaimer

This software is published for academic and non-commercial use only.

Setup

This code is based on Torch. It has been tested on Ubuntu 14.04 LTS.

Dependencies:

  • Torch (with matio-ffi and loadcaffe)
  • Matlab or Octave

CUDA backend:

  • CUDA
  • cudnn
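A possible install sequence on Ubuntu, assuming Torch itself is already installed; the luarocks and apt package names below are assumptions based on each project's documentation and are not pinned by this repo:

```
# Torch packages used by the Lua scripts (names are assumptions; check each project's README)
luarocks install matio      # FFI bindings to libmatio (matio-ffi)
luarocks install loadcaffe  # loads the VGG-19 Caffe model

# System packages (names may differ per Ubuntu release)
sudo apt-get install libmatio2 octave
```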

Download VGG-19:

`sh models/download_models.sh`

Compile `cuda_utils.cu` (adjust `PREFIX` and `NVCC_PREFIX` in the `makefile` for your machine):

`make clean && make`
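To check that the build produced a module Torch can load, here is a quick sanity check; the module name `libcuda_utils` is inferred from the source file name and is an assumption:

```
# Run from the repository root; prints the message if the compiled CUDA module loads
th -e "require 'libcuda_utils'; print('cuda_utils loaded OK')"
```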

Usage

Quick start

To generate all results (in `examples/`) using the provided scripts, simply run `run('gen_laplacian/gen_laplacian.m')` in Matlab or Octave and then run `python gen_all.py`. The final output will be in `examples/final_results/`.
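A minimal shell sketch of the same two steps, assuming Octave is on your `PATH` (with Matlab, run the first command at the Matlab prompt instead):

```
# Step 1: compute the matting Laplacian matrices for all provided examples
octave --eval "run('gen_laplacian/gen_laplacian.m')"

# Step 2: run the two-stage style transfer for every example
python gen_all.py
```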

Basic usage

  1. Given input and style images with semantic segmentation masks, put them under `examples/` in the respective subfolders. They should follow the naming convention `examples/input/in<id>.png`, `examples/style/tar<id>.png`, `examples/segmentation/in<id>.png` and `examples/segmentation/tar<id>.png`.
  2. Compute the matting Laplacian matrix using `gen_laplacian/gen_laplacian.m` in Matlab. The output matrix will have the filename form `gen_laplacian/Input_Laplacian_3x3_1e-7_CSR<id>.mat`.

Note: Please make sure that the content image resolution is consistent between the matting Laplacian computation in Matlab and the style transfer in Torch, otherwise the result won't be correct.

  3. Run the following script to generate the segmented intermediate result (a worked example for steps 3 and 4 follows this list):
     `th neuralstyle_seg.lua -content_image <input> -style_image <style> -content_seg <inputMask> -style_seg <styleMask> -index <id> -serial <intermediate_folder>`
  4. Run the following script to generate the final result:
     `th deepmatting_seg.lua -content_image <input> -style_image <style> -content_seg <inputMask> -style_seg <styleMask> -index <id> -init_image <intermediate_folder/out<id>_t_1000.png> -serial <final_folder> -f_radius 15 -f_edge 0.01`
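For concreteness, a hedged end-to-end run for `<id>` = 1 following the naming convention from step 1; `examples/tmp_results` is a placeholder intermediate folder, not a path mandated by the repo:

```
# Create output folders (in case the scripts expect them to exist)
mkdir -p examples/tmp_results examples/final_results

# Step 3: intermediate, segmentation-aware style transfer
th neuralstyle_seg.lua -content_image examples/input/in1.png \
  -style_image examples/style/tar1.png \
  -content_seg examples/segmentation/in1.png \
  -style_seg examples/segmentation/tar1.png \
  -index 1 -serial examples/tmp_results

# Step 4: final pass with the matting Laplacian, initialized from the step-3 output
th deepmatting_seg.lua -content_image examples/input/in1.png \
  -style_image examples/style/tar1.png \
  -content_seg examples/segmentation/in1.png \
  -style_seg examples/segmentation/tar1.png \
  -index 1 -init_image examples/tmp_results/out1_t_1000.png \
  -serial examples/final_results -f_radius 15 -f_edge 0.01
```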
    

You can pass `-backend cudnn` and `-cudnn_autotune` to both Lua scripts (steps 3 and 4) to potentially improve speed and memory usage. `libcudnn.so` must be in your `LD_LIBRARY_PATH`. This requires cudnn.torch.
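For example, with the cuDNN backend enabled on the step-3 command; the CUDA library path below is a typical location and only an assumption:

```
# libcudnn.so must be discoverable; adjust the path to your install
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

th neuralstyle_seg.lua -content_image examples/input/in1.png \
  -style_image examples/style/tar1.png \
  -content_seg examples/segmentation/in1.png \
  -style_seg examples/segmentation/tar1.png \
  -index 1 -serial examples/tmp_results \
  -backend cudnn -cudnn_autotune
```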

Image segmentation

Note: In the main paper we generate all comparison results using an automatic scene segmentation algorithm modified from DilatedNet. Manual segmentation enables more diverse tasks, hence we also provide the masks in `examples/segmentation/`.

The mask colors we used (you can add more colors in the `ExtractMask` function in the two `*.lua` files):

| Color variable | RGB Value | Hex Value |
| ------------- | ------------- | ------------- |
| blue | 0 0 255 | 0000ff |
| green | 0 255 0 | 00ff00 |
| black | 0 0 0 | 000000 |
| white | 255 255 255 | ffffff |
| red | 255 0 0 | ff0000 |
| yellow | 255 255 0 | ffff00 |
| grey | 128 128 128 | 808080 |
| lightblue | 0 255 255 | 00ffff |
| purple | 255 0 255 | ff00ff |
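If you create your own masks, one way to confirm that a mask contains only the flat colors listed above is to dump its color histogram; ImageMagick's `convert` is used here as an assumption and is not a dependency of this repo:

```
# Lists every distinct color in the mask with a pixel count;
# the output should contain only colors from the table above
convert examples/segmentation/in1.png -format %c histogram:info:-
```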

Here are some automatic and manual tools for creating a segmentation mask for a photo image:

Automatic:

Manual:

Examples

Here are some results from our algorithm (from left to right: input, style, and our output):

Acknowledgement

  • Our Torch implementation is based on Justin Johnson's code;
  • We use Anat Levin's Matlab code to compute the matting Laplacian matrix.

Citation

If you find this work useful for your research, please cite:

@article{luan2017deep,
  title={Deep Photo Style Transfer},
  author={Luan, Fujun and Paris, Sylvain and Shechtman, Eli and Bala, Kavita},
  journal={arXiv preprint arXiv:1703.07511},
  year={2017}
}

Contact

Feel free to contact me if there are any questions (Fujun Luan, [email protected]).
