OneShotTranslation

PyTorch implementation of One-Shot Unsupervised Cross Domain Translation (NeurIPS 2018; arXiv).

Prerequisites

  • Python 3.6
  • Pytorch 0.4
  • Numpy/Scipy/Pandas
  • Progressbar
  • OpenCV
  • visdom
  • dominate
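
The repository does not pin exact versions, so the following environment setup is only a sketch; the package names progressbar2 and opencv-python are assumptions, and PyTorch 0.4 wheels may have to be fetched from the official download archive rather than plain pip:

pip install numpy scipy pandas progressbar2 opencv-python visdom dominate
pip install torch==0.4.1 torchvision==0.2.1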

MNIST-to-SVHN and SVHN-to-MNIST

To train the autoencoder for both MNIST and SVHN (in the mnist_to_svhn folder): python main_autoencoder.py --use_augmentation=True

To train OST for MNIST to SVHN: python main_mnist_to_svhn.py --pretrained_g=True --save_models_and_samples=True --use_augmentation=True --one_way_cycle=True --freeze_shared=False

To train OST for SVHN to MNIST: python main_svhn_to_mnist.py --pretrained_g=True --save_models_and_samples=True --use_augmentation=True --one_way_cycle=True --freeze_shared=False
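
The --pretrained_g=True flag suggests that the OST step expects the autoencoder trained by the first command, so a full MNIST-to-SVHN run (from the mnist_to_svhn folder) would look like:

python main_autoencoder.py --use_augmentation=True
python main_mnist_to_svhn.py --pretrained_g=True --save_models_and_samples=True --use_augmentation=True --one_way_cycle=True --freeze_shared=False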

Drawing and Style Transfer Tasks

Download Dataset

To download a dataset (in the drawing_and_style_transfer folder): bash datasets/download_cyclegan_dataset.sh $DATASET_NAME where DATASET_NAME is one of (facades, cityscapes, maps, monet2photo, summer2winter_yosemite)
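
For example, to fetch the facades data used in the commands below:

bash datasets/download_cyclegan_dataset.sh facades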

Train Autoencoder

To train the autoencoder for facades (in the drawing_and_style_transfer folder): python train.py --dataroot=./datasets/facades/trainB --name=facades_autoencoder --model=autoencoder --dataset_mode=single --no_dropout --n_downsampling=2 --num_unshared=2

In the reverse direction (images of facades): python train.py --dataroot=./datasets/facades/trainA --name=facades_autoencoder_reverse --model=autoencoder --dataset_mode=single --no_dropout --n_downsampling=2 --num_unshared=2
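
The --name values chosen here are referenced later by the OST commands through --load_dir, so they must match exactly. Assuming the usual checkpoints/<name> layout of this code base (an assumption, not stated in the README), the trained weights would end up in directories such as:

./checkpoints/facades_autoencoder/
./checkpoints/facades_autoencoder_reverse/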

Train OST

To train OST for images to facades: python train.py --dataroot=./datasets/facades/ --name=facades_ost --load_dir=facades_autoencoder --model=ost --no_dropout --n_downsampling=2 --num_unshared=2 --start=0 --max_items_A=1

To train OST for facades to images (reverse direction): python train.py --dataroot=./datasets/facades/ --name=facades_ost_reverse --load_dir=facades_autoencoder_reverse --model=ost --no_dropout --n_downsampling=2 --num_unshared=2 --start=0 --max_items_A=1 --A='B' --B='A'

To visualize losses: run python -m visdom.server
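
visdom serves its dashboard on port 8097 by default, so a convenient pattern is to keep the server running in a separate terminal while training and watch the loss plots in a browser:

python -m visdom.server    # then open http://localhost:8097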

Test OST

To test OST for images to facades: python test.py --dataroot=./datasets/facades/ --name=facades_ost --model=ost --no_dropout --n_downsampling=2 --num_unshared=2 --start=0 --max_items_A=1

To test OST for facades to images (reverse direction): python test.py --dataroot=./datasets/facades/ --name=facades_ost_reverse --model=ost --no_dropout --n_downsampling=2 --num_unshared=2 --start=0 --max_items_A=1 --A='B' --B='A'
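
Putting the forward direction together, a complete images-to-facades run (autoencoder training, OST training, then testing) simply chains the commands above:

python train.py --dataroot=./datasets/facades/trainB --name=facades_autoencoder --model=autoencoder --dataset_mode=single --no_dropout --n_downsampling=2 --num_unshared=2
python train.py --dataroot=./datasets/facades/ --name=facades_ost --load_dir=facades_autoencoder --model=ost --no_dropout --n_downsampling=2 --num_unshared=2 --start=0 --max_items_A=1
python test.py --dataroot=./datasets/facades/ --name=facades_ost --model=ost --no_dropout --n_downsampling=2 --num_unshared=2 --start=0 --max_items_A=1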

Options

Additional scripts for other datasets are at ./drawing_and_style_transfer/scripts

Options are at ./drawing_and_style_transfer/options
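
The option files appear to define standard argparse flags, in which case the full list accepted by each script can be printed directly (an assumption about the CLI, not something stated in the README):

python train.py --help
python test.py --help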

Reference

If you found this code useful, please cite the following paper:

@inproceedings{Benaim2018OneShotUC,
  title={One-Shot Unsupervised Cross Domain Translation},
  author={Sagie Benaim and Lior Wolf},
  booktitle={NeurIPS},
  year={2018}
}
