GatedConvolution_pytorch

by avalonstrel


A modified PyTorch reimplementation of the inpainting model from Free-Form Image Inpainting with Gated Convolution [http://jiahuiyu.com/deepfill2/]. This repo is ported from https://github.com/avalonstrel/GatedConvolution and https://github.com/JiahuiYu/generative_inpainting.

This is a model for the image inpainting task. I implement the network structure and gated convolution from Free-Form Image Inpainting with Gated Convolution, with two differences from the original architecture (a sketch of the gated convolution layer follows the list below):

  • In the refine network, I do not employ contextual attention but use a self-attention layer instead.
  • I add batch normalization to each layer.
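For reference, here is a minimal sketch of a gated convolution layer with batch norm in the spirit of this repo; the module and argument names are illustrative, not the repo's exact code:

```python
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Gated convolution: a feature branch modulated by a learned soft gate.
    Illustrative sketch only; names and defaults are assumptions."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_ch)  # this repo adds batch norm to each layer
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        # output = activation(feature(x)) * sigmoid(gate(x))
        return self.bn(self.act(self.feature(x)) * torch.sigmoid(self.gate(x)))

# Usage: y = GatedConv2d(3, 64)(torch.randn(1, 3, 256, 256))  # -> (1, 64, 256, 256)
```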

Some results

Benchmark data and mask data can be found in Google Drive (Result).

How to test images with a pre-trained model?

I provide a pre-trained model (Baidu, Google) trained on the Places2 256x256 dataset. Unfortunately, only the coarse network can be loaded, since I changed the network structure after pre-training; the coarse network alone still works.

Run

bash scripts/test_inpaint.sh

You should provide a file listing the paths of the images you want to test, one per line:

test1.png

test2.png

... ...
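If your test images sit in one directory, a minimal sketch like this (the folder and output filename are hypothetical) can generate such a list:

```python
# Write one image path per line to a file-list ("flist") file.
from pathlib import Path

image_dir = Path("/path/to/test_images")  # assumption: your own image folder
with open("test.flist", "w") as f:
    for p in sorted(image_dir.glob("*.png")):
        f.write(f"{p}\n")
```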

Change the parameters in config/test_places2_sagan.yml. For the images:

places2: [
  'flist_file_for_train',
  'flist_file_for_test'
]

For the masks:

val: [
  'mask_flist_file_for_train',
  'mask_flist_file_for_test'
]

The mask file should be a pkl file containing a numpy.array.
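For example, a sketch of how such a mask file could be produced; the 256x256 shape and the "1 marks the hole" convention are my assumptions:

```python
import pickle
import numpy as np

# Assumed convention: an array where 1 marks the region to inpaint.
mask = np.zeros((256, 256), dtype=np.uint8)
mask[96:160, 96:160] = 1  # example rectangular hole
with open("mask_test.pkl", "wb") as f:
    pickle.dump(mask, f)
```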

MODEL_RESTORE should be set to the path of the pre-trained model. After the script runs successfully, you can find the results in result_logs/MODEL_RESTORE.

How to train your own model?

To train your own model on another dataset:

Run

bash scripts/run_inpaint_sa.sh

and provide the image file lists

places2: [
  'flist_file_for_train',
  'flist_file_for_test'
]

and the mask file lists

val: [
  'mask_flist_file_for_train',
  'mask_flist_file_for_test'
]

During training you can use either random free-form masks or random rectangular masks; I use random free-form masks. If you want to use random rectangular masks, change the mask-handling code in train_sagan.py (line 163) and set MASK_TYPES: ['random_bbox'].

The remaining training parameters are easy to understand as laid out in the config file.

Tensorboard

Run

tensorboard --logdir model_logs --port 6006

to view training progress.

Some tips about mask generation

We provide two random mask generation functions.

  • random free-form masks (see the first sketch after this list)

    The parameters for this function are

    RANDOM_FF_SETTING:
      img_shape: [256, 256]
      mv: 5
      ma: 4.0
      ml: 40
      mbw: 10

    following the meanings in http://jiahuiyu.com/deepfill2/.

  • random rectangular masks (see the second sketch below)

    RANDOM_BBOX_SHAPE: [32, 32]
    RANDOM_BBOX_MARGIN: [64, 64]

    These set the shape of each random bbox and the margin it keeps from the image border. (The number of rectangles can be set in inpaint_dataset.py via random_bbox_number=5.)
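For intuition, here is a minimal sketch of free-form mask generation in the spirit of DeepFill v2, reusing the RANDOM_FF_SETTING names above (mv strokes, ma max angle, ml max length, mbw max brush width); the repo's actual sampling may differ:

```python
import numpy as np
import cv2

def random_ff_mask(img_shape=(256, 256), mv=5, ma=4.0, ml=40, mbw=10):
    """Draw random brush strokes; 1 marks the hole (assumed convention)."""
    h, w = img_shape
    mask = np.zeros((h, w), np.float32)
    for _ in range(np.random.randint(1, mv + 1)):
        # Start each stroke at a random point and walk in random directions.
        x, y = int(np.random.randint(w)), int(np.random.randint(h))
        for _ in range(np.random.randint(1, 10)):
            angle = np.random.uniform(0, ma)
            length = int(np.random.randint(1, ml + 1))
            width = int(np.random.randint(5, mbw + 1))
            nx = int(np.clip(x + int(length * np.cos(angle)), 0, w - 1))
            ny = int(np.clip(y + int(length * np.sin(angle)), 0, h - 1))
            cv2.line(mask, (x, y), (nx, ny), 1.0, width)
            x, y = nx, ny
    return mask
```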
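And a corresponding sketch for rectangular masks using RANDOM_BBOX_SHAPE, RANDOM_BBOX_MARGIN, and random_bbox_number (again illustrative, not the repo's exact code):

```python
import numpy as np

def random_bbox_mask(img_shape=(256, 256), bbox_shape=(32, 32),
                     margin=(64, 64), random_bbox_number=5):
    """Place several random boxes, keeping the given margin from the border."""
    h, w = img_shape
    bh, bw = bbox_shape
    mh, mw = margin
    mask = np.zeros((h, w), np.float32)
    for _ in range(random_bbox_number):
        top = np.random.randint(mh, h - bh - mh + 1)
        left = np.random.randint(mw, w - bw - mw + 1)
        mask[top:top + bh, left:left + bw] = 1.0  # 1 marks the hole (assumed)
    return mask
```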

Acknowledgments

This project builds on the official code of DeepFillv1 and SNGAN. Special thanks to the authors of this amazing algorithm.
