
Photo-Sketching: Inferring Contour Drawings from Images


This repo contains the training & testing code for our sketch generator. We also provide a [pre-trained model].

For technical details and the dataset, please refer to the [paper] and the [project page].

Setting up

The code is now updated to use PyTorch 0.4 and runs on Windows, Mac and Linux. For the obsolete version with PyTorch 0.3 (Linux only), please check out the branch pytorch-0.3-obsolete.
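Because the two branches target different PyTorch versions, a quick version guard can catch an obsolete install before the scripts fail mid-run. This is a hedged sketch, not part of the repo: the helper name is made up, and the torch import is left commented so the snippet stands alone.

```python
def version_tuple(version):
    """Turn a dotted version string like '0.4.1' into a comparable tuple of ints.

    Local build suffixes such as '1.0.0+cu90' are stripped before parsing.
    """
    return tuple(int(part) for part in version.split("+")[0].split(".") if part.isdigit())

# Typical use before running the scripts (assumes torch is installed):
# import torch
# assert version_tuple(torch.__version__) >= (0, 4), "PyTorch >= 0.4 required"
```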

Windows users should use the corresponding Windows scripts instead of the shell scripts mentioned below.

One-line installation (with Conda environments)

conda env create -f environment.yml

Then activate the environment (conda activate sketch) and you are ready to go!

See the conda documentation for more information about conda environments.

Manual installation

See environment.yml for a list of dependencies.

Using the pre-trained model

  • Download the pre-trained model
  • Modify the model path in the test script
  • From the repo's root directory, run the test script

It supports a folder of images as input.
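The folder-of-images input can be illustrated with a small helper. This is a hedged sketch only: the function name and extension set are assumptions for illustration, not the repo's actual loader.

```python
from pathlib import Path

# Extensions accepted by this sketch; the actual test script may accept others.
IMG_EXTS = {".jpg", ".jpeg", ".png", ".bmp"}

def list_images(folder):
    """Return image files in `folder`, sorted so the output order is reproducible."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.is_file() and p.suffix.lower() in IMG_EXTS)
```

Each returned path would then be fed to the generator in turn, which is what running the test script on a folder amounts to.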

Train & test on our contour drawing dataset

  • Download the images and the rendered sketches from the project page
  • Unzip and organize them into the file structure expected by the code
  • Modify the dataset path in the training and testing scripts
  • From the repo's root directory, run the training script to train the model
  • From the repo's root directory, run the testing script to evaluate on the val set or the test set (specified by the phase flag)
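The phase flag mentioned above selects which split to evaluate. A minimal argparse sketch of such a flag follows; the flag name and the set of choices are assumptions based on the text, not the repo's actual options parser.

```python
import argparse

def parse_phase(argv=None):
    """Parse a --phase flag restricted to the dataset splits described above."""
    parser = argparse.ArgumentParser(description="Select the dataset split to run on.")
    parser.add_argument("--phase", choices=["train", "val", "test"], default="val",
                        help="which split to evaluate")
    return parser.parse_args(argv).phase
```

Restricting the flag with choices makes a typo like --phase tst fail fast with a clear error instead of silently evaluating the wrong split.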


If you use the code or the data for your research, please cite the paper:

@inproceedings{LIPS2019,
  title={Photo-Sketching: Inferring Contour Drawings from Images},
  author={Li, Mengtian and Lin, Zhe and M\v{e}ch, Radom\'ir and Yumer, Ersin and Ramanan, Deva},
  booktitle={WACV},
  year={2019}
}


This code is based on an old version of pix2pix.
