Context Encoders: Feature Learning by Inpainting

This is a PyTorch implementation of the CVPR 2016 paper Context Encoders: Feature Learning by Inpainting.

(Figure: corrupted input and inpainted result)

1) Semantic Inpainting Demo

  1. Install PyTorch http://pytorch.org/

  2. Clone the repository

    ```Shell
    git clone https://github.com/BoyuanJiang/context_encoder_pytorch.git
    ```
  3. Demo

    Download the pre-trained model on Paris StreetView from Google Drive OR BaiduNetdisk

    ```Shell
    cp netG_streetview.pth context_encoder_pytorch/model/
    cd context_encoder_pytorch/

    # Inpaint a batch of images
    python test.py --netG model/netG_streetview.pth --dataroot dataset/val --batchSize 100

    # Inpaint a single image
    python test_one.py --netG model/netG_streetview.pth --test_image result/test/cropped/065_im.png
    ```
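    Conceptually, the demo corrupts the central region of each image, runs the generator on the corrupted input, and pastes the predicted patch back. The sketch below illustrates this for a single image; it is a minimal, hypothetical example (the generator loading, the 128x128/64x64 sizes, and the zero-fill corruption are assumptions, not this repo's exact code).

    ```python
    # Minimal, hypothetical sketch of center inpainting with a trained generator.
    # Assumes `netG` maps a corrupted 128x128 image to a 64x64 center patch,
    # which matches the paper's setup but may differ from this repo's exact code.
    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.utils import save_image

    def inpaint_center(netG, image_path, image_size=128, out_path="inpainted.png"):
        tf = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
            transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
        ])
        real = tf(Image.open(image_path).convert("RGB")).unsqueeze(0)  # 1 x 3 x 128 x 128

        # Corrupt the central quarter of the image (a 64 x 64 region).
        c, half = image_size // 4, image_size // 2
        corrupted = real.clone()
        corrupted[:, :, c:c + half, c:c + half] = 0.0

        # Predict the missing center and paste it back into the corrupted image.
        netG.eval()
        with torch.no_grad():
            fake_center = netG(corrupted)                              # 1 x 3 x 64 x 64
        result = corrupted.clone()
        result[:, :, c:c + half, c:c + half] = fake_center

        save_image(result * 0.5 + 0.5, out_path)                       # undo normalization
        return result
    ```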

2) Train on your own dataset

  1. Build dataset

    Put your images under dataset/train; all images should be placed in subdirectories (a minimal loading sketch is shown below):

    dataset/train/subdirectory1/some_images

    dataset/train/subdirectory2/some_images

    ...

    Note: Due to Google's policy, the Paris StreetView dataset is not publicly available; for research use, please contact pathak22. You can also use The Paris Dataset to train your model.
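    The layout above is the convention torchvision's ImageFolder expects (one subdirectory per group of images). Below is a minimal sketch of loading such a tree for training; the 128-pixel size, normalization, and batch size are assumptions, not necessarily the repo's exact settings.

    ```python
    # Minimal sketch: load dataset/train (one subdirectory per group of images)
    # with torchvision's ImageFolder. Sizes and normalization are assumptions.
    import torch
    from torchvision import datasets, transforms

    transform = transforms.Compose([
        transforms.Resize(128),
        transforms.CenterCrop(128),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])

    train_set = datasets.ImageFolder(root="dataset/train", transform=transform)
    train_loader = torch.utils.data.DataLoader(
        train_set, batch_size=64, shuffle=True, num_workers=2
    )
    ```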

  2. Train

    ```Shell
    python train.py --cuda --wtl2 0.999 --niter 200
    ```
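    Here --wtl2 weights the generator's L2 reconstruction loss against its adversarial loss, following the context-encoder objective; with 0.999 the reconstruction term dominates. Below is a hypothetical sketch of that weighting (variable names are illustrative, not the repo's exact code).

    ```python
    # Hypothetical sketch of the generator objective weighted by wtl2:
    # loss_G = wtl2 * L2(predicted center, real center) + (1 - wtl2) * adversarial loss.
    import torch
    import torch.nn as nn

    wtl2 = 0.999
    adv_criterion = nn.BCELoss()   # adversarial term: fool the discriminator
    rec_criterion = nn.MSELoss()   # L2 reconstruction term on the missing region

    def generator_loss(d_out_on_fake, fake_center, real_center):
        real_labels = torch.ones_like(d_out_on_fake)
        loss_adv = adv_criterion(d_out_on_fake, real_labels)
        loss_rec = rec_criterion(fake_center, real_center)
        return (1 - wtl2) * loss_adv + wtl2 * loss_rec
    ```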
  3. Test

    This step is the same as the Semantic Inpainting Demo above, except you pass your own trained generator weights to --netG.
