Seeing Motion in the Dark (ICCV 2019)
Required python (version 2.7) libraries: Tensorflow (1.8.0) + Scipy + Numpy + Rawpy + OpenCV (4.1.0).
Tested on Ubuntu 16.04 with an Nvidia Tesla V100 32 GB, Cuda (>=9.0), and CuDNN (>=7.1). CPU mode should also work with minor changes, but it has not been tested.
Put the dataset in ./DRV. To try the pre-trained model, first download the model.
To retrain a new model, run:
python download_VGG_models.py
python train.py
To generate the 5th frame of each video, run
To generate the videos, run
By default, the code takes the data in the ./DRV/ folder and writes results to an output folder.
If you use our code and dataset for research, please cite our paper:
Chen Chen, Qifeng Chen, Minh N. Do, and Vladlen Koltun, "Seeing Motion in the Dark", in ICCV, 2019.
The proposed method is designed for sensor raw data. The pretrained model will probably not work for data from another camera sensor, and we do not support other cameras' data. It also does not work for images processed by the camera ISP, i.e., JPG or PNG data.
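The last point can be illustrated with a toy numerical sketch (this is not the authors' pipeline, and the bit depths are assumed values): after the ISP quantizes to 8 bits, very dark linear signal levels that a higher-bit-depth raw file still distinguishes collapse to the same code.

```python
import numpy as np

# Two very dark linear scene radiances differing by a factor of 2
raw = np.array([0.0005, 0.0010])

# 8-bit quantization (as in JPG/PNG output after the ISP):
# both values collapse to code 0 -- the difference is gone.
codes_8bit = np.round(255.0 * raw).astype(np.uint8)

# 14-bit quantization (hypothetical raw sensor bit depth):
# the two values remain clearly distinguishable.
codes_14bit = np.round(16383.0 * raw).astype(np.uint16)

print(codes_8bit.tolist())   # [0, 0]
print(codes_14bit.tolist())  # [8, 16]
```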
This is a research project and a prototype to prove a concept.
Generally, you will need to pre-process your data in a similar way: black level subtraction, packing, applying the target gain, and running some pre-defined temporal filters. The test data should be pre-processed in the same way.
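A rough sketch of the black-level-subtraction, packing, and gain steps is below (the black level, saturation level, gain, and RGGB Bayer layout are hypothetical values for illustration, not the DRV camera's; the temporal filtering is omitted):

```python
import numpy as np

BLACK_LEVEL = 512      # hypothetical sensor black level -- check your camera
WHITE_LEVEL = 16383    # hypothetical 14-bit saturation level

def pack_raw(bayer, gain):
    """Subtract the black level, normalize, pack the 2x2 Bayer mosaic
    into 4 half-resolution channels, and apply the target gain."""
    # Black level subtraction and normalization to [0, 1]
    img = (bayer.astype(np.float32) - BLACK_LEVEL) / (WHITE_LEVEL - BLACK_LEVEL)
    img = np.clip(img, 0.0, 1.0)
    h, w = img.shape
    # Pack the assumed RGGB mosaic into 4 channels at half resolution
    packed = np.stack([img[0:h:2, 0:w:2],   # R
                       img[0:h:2, 1:w:2],   # G1
                       img[1:h:2, 0:w:2],   # G2
                       img[1:h:2, 1:w:2]],  # B
                      axis=-1)
    return packed * gain  # apply the target (amplification) gain

# Example on synthetic data
raw = np.random.randint(0, WHITE_LEVEL, size=(8, 8), dtype=np.uint16)
out = pack_raw(raw, gain=100.0)
print(out.shape)  # (4, 4, 4)
```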
We provide a pretrain_on_small.py for GPUs with small memory. After training on the small resolution, you will need to finetune on CPU using train.py (modify the epoch and learning rate so that training continues).
If you have additional questions after reading the FAQ, please email [email protected]