If you use this code for your research, please cite:
Install torchvision from source:

```bash
git clone https://github.com/pytorch/vision
cd vision
python setup.py install
```
Install the Python libraries `visdom` and `dominate`:

```bash
pip install visdom
pip install dominate
```
Clone this repo:

```bash
git clone https://github.com/AAnoosheh/ComboGAN.git
cd ComboGAN
```
Our ready datasets can be downloaded using
A pretrained model for the 14-painters dataset can be found HERE. Place it under `./checkpoints/` and test it using the instructions below, with the arguments `--name paint14_pretrained --dataroot ./datasets/painters_14 --n_domains 14 --which_epoch 1150`.
Example running scripts can be found in the
```bash
python train.py --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs> --niter_decay <num_epochs_decay>
```

Checkpoints will be saved by default to `./checkpoints/<experiment_name>/`.
To continue training from a saved checkpoint:

```bash
python train.py --continue_train --which_epoch <checkpoint_epoch> --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --niter <num_epochs> --niter_decay <num_epochs_decay>
```
```bash
python test.py --phase test --name <experiment_name> --dataroot ./datasets/<your_dataset> --n_domains <N> --which_epoch <checkpoint_epoch> --serial_test
```

The test results will be saved to an HTML file here:
See `options/train_options.py` for training-specific flags, `options/test_options.py` for test-specific flags, and `options/base_options.py` for the flags common to both.
The dataset directory (given by `--dataroot`) should contain subfolders of the form `train*/` and `test*/`, and they are loaded in alphabetical order. (Note that a folder named `train10` would be loaded before `train2`, and thus all checkpoints and results would be ordered accordingly.)
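The alphabetical-ordering caveat above is easy to check directly; a minimal sketch (the folder names are illustrative):

```python
# Domain subfolders are discovered in lexicographic (string) order,
# so "train10" sorts before "train2".
folders = ["train2", "train10", "train1"]
print(sorted(folders))  # -> ['train1', 'train10', 'train2']

# Zero-padding the names keeps lexicographic and numeric order in sync.
padded = ["train02", "train10", "train01"]
print(sorted(padded))  # -> ['train01', 'train02', 'train10']
```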
CPU/GPU (default `--gpu_ids 0`): set `--gpu_ids -1` to use CPU mode; set `--gpu_ids 0,1,2` for multi-GPU mode. You need a large batch size (e.g. `--batchSize 32`) to benefit from multiple GPUs.
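As a sketch, a comma-separated `--gpu_ids` string can be turned into a list of device indices like this (the helper name is hypothetical; the actual parsing lives in `options/base_options.py`):

```python
def parse_gpu_ids(gpu_ids: str) -> list:
    """Turn a --gpu_ids string like "0,1,2" into [0, 1, 2].

    "-1" (CPU mode) yields an empty list, i.e. no GPUs are used.
    """
    ids = [int(tok) for tok in gpu_ids.split(",") if tok.strip()]
    return [i for i in ids if i >= 0]

print(parse_gpu_ids("0,1,2"))  # -> [0, 1, 2]
print(parse_gpu_ids("-1"))     # -> [] (CPU mode)
```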
During training, if you set `--display_id` > 0, the results and loss plot will appear on a local graphics web server launched by visdom. To do this, you should have `visdom` installed and a server running via the command `python -m visdom.server`. The default server URL is http://localhost:8097.
`display_id` corresponds to the window ID that is displayed on the visdom server. The `visdom` display functionality is turned on by default; to avoid the extra overhead of communicating with `visdom`, set `--display_id 0`. Secondly, the intermediate results are also saved to `./checkpoints/<experiment_name>/web/index.html`. To avoid this, set the `--no_html` flag.
Images can be resized and cropped in different ways using the `--resize_or_crop` option. The default option `'resize_and_crop'` resizes the image to size `(opt.loadSize, opt.loadSize)` and then takes a random crop of size `(opt.fineSize, opt.fineSize)`. `'crop'` skips the resizing step and only performs random cropping. `'scale_width'` resizes the image to have width `opt.fineSize` while keeping the aspect ratio. `'scale_width_and_crop'` first resizes the image to have width `opt.loadSize` and then does a random crop of size `(opt.fineSize, opt.fineSize)`.
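As a rough illustration of the default `'resize_and_crop'` behaviour, the random crop can be sketched as picking a box inside the resized image (this is not the repo's actual implementation; `load_size`/`fine_size` stand in for `opt.loadSize`/`opt.fineSize`, and 286/256 are illustrative values):

```python
import random

def resize_and_crop_box(load_size: int, fine_size: int) -> tuple:
    """Sketch of 'resize_and_crop': the image is first resized to
    (load_size, load_size), then a random (fine_size, fine_size)
    patch is cut out of it. Returns a PIL-style (left, top, right,
    bottom) crop box."""
    assert fine_size <= load_size
    x = random.randint(0, load_size - fine_size)  # left edge of the crop
    y = random.randint(0, load_size - fine_size)  # top edge of the crop
    return (x, y, x + fine_size, y + fine_size)

box = resize_and_crop_box(load_size=286, fine_size=256)
print(box)  # a random 256x256 box inside a 286x286 image
```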
NOTE: one should not expect ComboGAN to work on just any combination of input and output datasets (e.g. dogs <-> houses). We find it works better if two datasets share similar visual content. For example, landscape painting <-> landscape photographs works much better than portrait painting <-> landscape photographs.