
Deep Fusion Network for Image Completion - ACMMM 2019


Deep Fusion Network for Image Completion

Introduction

Deep image completion methods usually fail to blend the restored content harmoniously into the existing image, especially in the boundary area, and they often fail to complete complex structures.

We first introduce the Fusion Block, which generates a flexible alpha composition map to combine known and unknown regions. It builds a bridge for structural and texture information, so that information from the known region can be naturally propagated into the completion area. With this technique, the completion results have a smooth transition near the boundary of the completed area.

Furthermore, the architecture of the fusion block enables us to apply multi-scale constraints, which considerably improve DFNet's structural consistency.

Moreover, it is easy to apply this fusion block and the multi-scale constraints to other existing deep image completion models. A fusion block, fed with feature maps and the input image, produces a completion result at the same resolution as the given feature maps.
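To make the idea concrete, here is a minimal PyTorch sketch of what such a fusion block could look like. This is an illustration, not the actual implementation from this repository: the class name, layer sizes, and activations are assumptions; only the overall scheme (predict a raw completion and an alpha composition map, then alpha-blend with the resized input image) follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionBlock(nn.Module):
    """Illustrative sketch of a fusion block (names and sizes are hypothetical).

    Given feature maps and the input image, it predicts a raw completion and
    an alpha composition map, then blends known and generated content.
    """

    def __init__(self, in_channels):
        super().__init__()
        # predict a raw RGB completion from the feature maps
        self.to_rgb = nn.Conv2d(in_channels, 3, kernel_size=3, padding=1)
        # predict a single-channel alpha map from features + resized image
        self.to_alpha = nn.Sequential(
            nn.Conv2d(in_channels + 3, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat, img):
        # resize the input image to the feature-map resolution
        img = F.interpolate(img, size=feat.shape[2:],
                            mode='bilinear', align_corners=False)
        raw = torch.tanh(self.to_rgb(feat))  # raw completion in [-1, 1]
        alpha = self.to_alpha(torch.cat([feat, img], dim=1))
        # alpha-blend known content (img) with generated content (raw)
        return alpha * img + (1 - alpha) * raw


feat = torch.randn(1, 64, 32, 32)   # example feature maps
img = torch.randn(1, 3, 128, 128)   # example input image
out = FusionBlock(64)(feat, img)
print(out.shape)                    # torch.Size([1, 3, 32, 32])
```

Note that the output has the same spatial resolution as the feature maps, which is what allows a constraint to be attached at every decoder scale.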

More details can be found in our paper.

The illustration of a fusion block:

Examples of corresponding images:

If you find this code useful for your research, please cite:

@inproceedings{Hong:2019:DFN:3343031.3351002,
 author = {Hong, Xin and Xiong, Pengfei and Ji, Renhe and Fan, Haoqiang},
 title = {Deep Fusion Network for Image Completion},
 booktitle = {Proceedings of the 27th ACM International Conference on Multimedia},
 series = {MM '19},
 year = {2019},
 isbn = {978-1-4503-6889-6},
 location = {Nice, France},
 pages = {2033--2042},
 numpages = {10},
 url = {http://doi.acm.org/10.1145/3343031.3351002},
 doi = {10.1145/3343031.3351002},
 acmid = {3351002},
 publisher = {ACM},
 address = {New York, NY, USA},
 keywords = {alpha composition, deep fusion network, fusion block, image completion, inpainting},
} 

Prerequisites

  • Python 3
  • PyTorch 1.0
  • OpenCV

Testing

Clone this repo:

git clone https://github.com/hughplay/DFNet.git
cd DFNet

Download the pre-trained models from Google Drive and put them into
model
.

Testing with Places2 model

There are already some sample images in the
samples/places2
folder.
python test.py --model model/model_places2.pth --img samples/places2/img --mask samples/places2/mask --output output/places2 --merge

Testing with CelebA model

There are already some sample images in the
samples/celeba
folder.
python test.py --model model/model_celeba.pth --img samples/celeba/img --mask samples/celeba/mask --output output/celeba --merge

Training

Currently we don't provide training code. If you want to train this model on your own dataset, the training settings in
config.yaml
may be useful, and the loss functions defined in
loss.py
are available.
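Since each fusion block outputs a completion at its own resolution, the multi-scale constraints mentioned above can be expressed as a loss summed over scales. The following is a hedged sketch of one such loss; the function name and weighting scheme are assumptions for illustration, and the actual loss functions used by DFNet live in loss.py.

```python
import torch
import torch.nn.functional as F


def multiscale_l1_loss(outputs, target, weights=None):
    """Hypothetical multi-scale reconstruction loss.

    Compares each fusion-block output with the target image downsampled
    to the same resolution, then sums the (optionally weighted) L1 terms.
    """
    weights = weights or [1.0] * len(outputs)
    loss = torch.zeros(())
    for w, out in zip(weights, outputs):
        # downsample the target to match this scale's output
        scaled = F.interpolate(target, size=out.shape[2:],
                               mode='bilinear', align_corners=False)
        loss = loss + w * F.l1_loss(out, scaled)
    return loss


# example: constraints at two scales of a 64x64 target
target = torch.randn(1, 3, 64, 64)
outputs = [torch.randn(1, 3, 64, 64), torch.randn(1, 3, 32, 32)]
loss = multiscale_l1_loss(outputs, target)
```

In practice, the paper's full objective also includes perceptual-style terms; this sketch only shows how per-scale supervision is wired up.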

License

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
