
deep-video-prior (DVP)

Code for NeurIPS 2020 paper: Blind Video Temporal Consistency via Deep Video Prior

PyTorch implementation | paper | project website

Introduction

Our method is a general framework for improving the temporal consistency of videos processed by image algorithms. For example, by combining an image colorization or image dehazing algorithm with our framework, we achieve video colorization or video dehazing.
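As a rough illustration of the idea (a minimal sketch, not the repository's code): a single small CNN is trained to map each original frame to its processed counterpart, and because one network is fit to the whole video, its early-stopped outputs keep the processed appearance while suppressing frame-to-frame flicker.

import tensorflow as tf

# Hypothetical stand-in for the repo's U-Net (selected via --network unet).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(3, 3, padding="same"),
])
opt = tf.keras.optimizers.Adam(1e-4)

def train_step(input_frame, processed_frame):
    # input_frame, processed_frame: [1, H, W, 3] float32 tensors
    with tf.GradientTape() as tape:
        output = model(input_frame)
        loss = tf.reduce_mean(tf.abs(output - processed_frame))  # L1 loss
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# Training iterates over (input, processed) frame pairs for a few epochs;
# the temporally consistent result is model(input_frame) after early stopping.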

Dependency

Environment

This code is based on TensorFlow. It has been tested on Ubuntu 18.04 LTS.

Anaconda is recommended: Ubuntu 18.04 | Ubuntu 16.04

After installing Anaconda, you can set up the environment simply by:

conda env create -f environment.yml
conda activate deep-video-prior
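You can quickly verify that the environment is active (the exact TensorFlow version is pinned in environment.yml):

python -c "import tensorflow as tf; print(tf.__version__)"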

Download VGG model

cd deep-video-prior
python download_VGG.py
unzip VGG_Model.zip
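The VGG model is used by the perceptual loss (--loss perceptual). As a sketch of the idea only (the repository loads its own downloaded VGG weights, whereas this sketch uses the Keras ones for illustration), a feature-space loss compares VGG activations instead of raw pixels:

import tensorflow as tf

# Illustrative VGG feature extractor; the layer choice and preprocessing are
# simplified compared to the actual implementation.
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
features = tf.keras.Model(vgg.input, vgg.get_layer("block3_conv3").output)

def perceptual_loss(output, target):
    # output, target: [N, H, W, 3] float32 tensors in VGG's expected range
    return tf.reduce_mean(tf.abs(features(output) - features(target)))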

Inference

Demo

bash test.sh

The results are placed in ./result.

Use your own data

For a video with unimodal inconsistency:

python dvp_video_consistency.py --input PATH_TO_YOUR_INPUT_FOLDER --processed PATH_TO_YOUR_PROCESSED_FOLDER --task NAME_OF_YOUR_MODEL  --output ./result/OWN_DATA

For a video with multimodal inconsistency:

python dvp_video_consistency.py --input PATH_TO_YOUR_INPUT_FOLDER --processed PATH_TO_YOUR_PROCESSED_FOLDER --task NAME_OF_YOUR_MODEL --with_IRT 1 --IRT_initialization 1 --output ./result/OWN_DATA
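Both commands expect --input and --processed to point to folders of frames. If you start from a video file, a small helper (hypothetical; not part of this repo) can split it into numbered frames, e.g. with OpenCV:

import os
import cv2

def video_to_frames(video_path, out_dir):
    # Writes 00000.png, 00001.png, ... into out_dir.
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, "%05d.png" % idx), frame)
        idx += 1
    cap.release()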

Other information

  -h, --help            show this help message and exit
  --task TASK           Name of task
  --input INPUT         Dir of input video
  --processed PROCESSED
                        Dir of processed video
  --output OUTPUT       Dir of output video
  --use_gpu USE_GPU     Use gpu or not
  --loss {perceptual,l1,l2}
                        Chooses which loss to use. perceptual, l1, l2
  --network {unet}      Chooses which model to use. unet, fcn
  --coarse_to_fine_speedup COARSE_TO_FINE_SPEEDUP
                        Use coarse_to_fine_speedup for training
  --with_IRT WITH_IRT   Use IRT or not, set this to 1 if you want to solve
                        multimodal inconsistency
  --IRT_initialization IRT_INITIALIZATION
                        Use initialization for IRT
  --large_video LARGE_VIDEO
                        Set this to 1 when the number of video frames is
                        large, e.g., more than 1000 frames
  --save_freq SAVE_FREQ
                        Save frequency of epochs
  --max_epoch MAX_EPOCH
                        The max number of epochs for training
  --format FORMAT       Format of output image
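For example, the options can be combined like this (the flag values here are illustrative, not recommended defaults):

python dvp_video_consistency.py --input PATH_TO_YOUR_INPUT_FOLDER --processed PATH_TO_YOUR_PROCESSED_FOLDER --task NAME_OF_YOUR_MODEL --loss l1 --max_epoch 25 --save_freq 5 --output ./result/OWN_DATA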

Citation

If you find this work useful for your research, please cite:

@inproceedings{lei2020dvp,
  title={Blind Video Temporal Consistency via Deep Video Prior},
  author={Lei, Chenyang and Xing, Yazhou and Chen, Qifeng},
  booktitle={Advances in Neural Information Processing Systems},
  year={2020}
}

Contact

Please contact me (Chenyang Lei, [email protected]) if you have any questions.
