
EVDI - Unifying Motion Deblurring and Frame Interpolation with Events (Academic Use Only)

Slow shutter speed and long exposure time of frame-based cameras often cause visual blur and loss of inter-frame information, degrading the overall quality of captured videos. To this end, we present a unified framework of event-based motion deblurring and frame interpolation for blurry video enhancement, where the extremely low latency of events is leveraged to alleviate motion blur and facilitate intermediate frame prediction. Specifically, the mapping relation between blurry frames and sharp latent images is first predicted by a learnable double integral network, and a fusion network is then proposed to refine the coarse results by utilizing information from consecutive blurry inputs and the concurrent events. By exploring the mutual constraints among blurry frames, latent images, and event streams, we further propose a self-supervised learning framework that enables network training with real-world blurry videos and events.
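
For intuition, the classical event-based double integral (EDI) model from prior work relates a blurry frame B to the sharp latent image L(f) at a timestamp f inside the exposure. The sketch below is background notation only, not the exact formulation of our learnable double integral network, which replaces the fixed contrast threshold c with a learned mapping:

% Background sketch: the event-based double integral (EDI) relation.
% B : blurry frame, L(f) : latent image at timestamp f, T : exposure time,
% e(s) : event signal, c : contrast threshold (learned in EVDI's LDI network).
B = \frac{1}{T} \int_{f-T/2}^{f+T/2} L(t)\,dt
  = L(f) \cdot \frac{1}{T} \int_{f-T/2}^{f+T/2} \exp\!\Big( c \int_{f}^{t} e(s)\,ds \Big)\,dt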

Demo

10X (middle) and 100X (right) frame-rate results from one EVDI model.

[News]: Our work on generalizing self-supervised deblurring performance has been accepted by ICCV 2023 🎉. Feel free to check out and star GEM if it interests you! 😆

Environment setup

  • Python 3.7
  • PyTorch 1.4.0
  • opencv-python 3.4.2
  • NVIDIA GPU + CUDA
  • numpy, argparse

You can create a new Anaconda environment as follows.

conda create -n evdi python=3.7
conda activate evdi

Clone this repository.

git clone git@github.com:XiangZ-0/EVDI.git

Install the above dependencies.

cd EVDI
pip install -r requirements.txt

Download model and data

Pretrained models and some example data can be downloaded via Google Drive.
In our paper, we conduct experiments on three types of data:

  • GoPro contains synthetic blurry images and synthetic events. We first convert REDS into high-frame-rate videos using RIFE, then obtain blurry images by averaging sharp frames (see the sketch after this list) and generate events with ESIM.
  • HQF contains synthetic blurry images and real-world events from HQF, where blurry images are generated in the same manner as for GoPro.
  • RBE contains real-world blurry images and real-world events from RBE.
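
A minimal sketch of the blur-synthesis step mentioned above (averaging consecutive sharp frames over one exposure window); the paths and window size are illustrative assumptions, not the exact scripts used to build the datasets:

import glob
import cv2
import numpy as np

# Hypothetical sketch: synthesize one blurry frame by temporally averaging
# a window of consecutive sharp frames (paths and window size are assumed).
sharp_paths = sorted(glob.glob('./sharp/*.png'))[:49]   # one exposure window
frames = [cv2.imread(p).astype(np.float64) for p in sharp_paths]
blurry = np.mean(frames, axis=0)   # long exposure ~ average of sharp frames
cv2.imwrite('./blur/000000.png', blurry.astype(np.uint8))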

Quick start

Initialization

  • Change the working directory to './codes/'
cd codes
  • Copy the pretrained models to the directory './PreTrained/'
  • Copy the example data to the directory './Database/'

Test

  • Test on GoPro data
python Test.py --test_ts=0.5 --model_path=./PreTrained/EVDI-GoPro.pth --test_path=./Database/GoPro/ --save_path=./Result/EVDI-GoPro/ 
  • Test on HQF data
python Test.py --test_ts=0.5 --model_path=./PreTrained/EVDI-HQF.pth --test_path=./Database/HQF/ --save_path=./Result/EVDI-HQF/ 
  • Test on RBE data
python Test.py --test_ts=0.5 --model_path=./PreTrained/EVDI-RBE.pth --test_path=./Database/RBE/ --save_path=./Result/EVDI-RBE/
  • Test on GoPro-Color data
python Test.py --test_ts=0.5 --model_path=./PreTrained/EVDI-GoPro-Color.pth --test_path=./Database/GoPro-Color/ --save_path=./Result/EVDI-GoPro-Color/ --color_flag=1

Main Parameters:

  • --test_ts : reconstruction timestamp, normalized to [0,1].
  • --model_path : path to the pretrained model.
  • --test_path : path to the test dataset.
  • --save_path : path for the reconstruction results.
  • --color_flag : 1 for the color model, 0 for the grayscale model.
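
To produce high frame-rate outputs like the demo above (10X, 100X), you can sweep --test_ts over the normalized exposure interval. A minimal sketch; the timestamp grid and the per-timestamp save paths are assumptions, not part of the provided scripts:

import subprocess

# Hypothetical sketch: reconstruct 10 evenly spaced latent frames per blurry
# input by sweeping the normalized reconstruction timestamp --test_ts.
for i in range(10):
    ts = i / 9.0   # normalized timestamps in [0, 1]
    subprocess.run([
        'python', 'Test.py',
        '--test_ts={:.4f}'.format(ts),
        '--model_path=./PreTrained/EVDI-GoPro.pth',
        '--test_path=./Database/GoPro/',
        '--save_path=./Result/EVDI-GoPro-ts{:.2f}/'.format(ts),
    ], check=True)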

Train

If you want to train your own model, please prepare the blurry images and events in the following directory structure (example data is provided in './Database/Raw/' for reference):

<project root>
  |-- Database
  |     |-- Raw
  |     |     |-- Events.txt
  |     |     |-- Exposure_start.txt
  |     |     |-- Exposure_end.txt
  |     |     |-- Blur
  |     |     |     |-- 000000.png
  |     |     |     |-- 000001.png
  |     |     |     |-- ...
  • Events.txt contains event data in (t,x,y,p) format, with t in ns and p in {-1, 1} (see the loading sketch after this list).
  • Exposure_start.txt contains the exposure start timestamp of each blurry image in ns.
  • Exposure_end.txt contains the exposure end timestamp of each blurry image in ns.
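
A minimal sketch for loading and sanity-checking the raw files with numpy; the variable names are illustrative, and the column order and units follow the description above:

import numpy as np

# Events as (t, x, y, p): t in nanoseconds, polarity p in {-1, 1}.
events = np.loadtxt('./Database/Raw/Events.txt')
t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
assert set(np.unique(p)) <= {-1.0, 1.0}, 'polarity must be -1 or 1'

# Exposure intervals of the blurry images, also in nanoseconds.
exp_start = np.loadtxt('./Database/Raw/Exposure_start.txt')
exp_end = np.loadtxt('./Database/Raw/Exposure_end.txt')
assert np.all(exp_end > exp_start), 'each exposure must end after it starts'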

After arranging the raw data into the above structure, please pack them into training pairs by running

python Prepare_data.py --input_path=./Database/Raw/ --save_path=./Database/train/ --color_flag=0

Please set --color_flag=1 if you want to use color images. Finally, modify the parameters in 'Train.py' according to your needs and run

python Train.py

Main Parameters:

  • --model_path : model save path.
  • --train_path : path to the training dataset.
  • --num_epoch : number of training epochs.
  • --loss_wei : weights for the loss functions [blur-sharp, blur-event, sharp-event] (see the sketch after this list).
  • --num_frames : number of reconstructions per input, corresponding to 'M' in the paper (recommended >= 25).
  • --bs : batch size.
  • --lr : initial learning rate.
  • --color_flag : 1 for the color model, 0 for the grayscale model.
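
For reference, a sketch of how the --loss_wei weights enter the self-supervised objective as a weighted sum of the three consistency terms (the notation below is ours, not the paper's exact symbols):

% Assumed notation: total loss with weights [w1, w2, w3] from --loss_wei.
\mathcal{L}_{\mathrm{total}} = w_1 \mathcal{L}_{\mathrm{blur\text{-}sharp}}
                             + w_2 \mathcal{L}_{\mathrm{blur\text{-}event}}
                             + w_3 \mathcal{L}_{\mathrm{sharp\text{-}event}}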

Citation

If you find our work useful in your research, please cite:

@inproceedings{zhang2022unifying,
  title={Unifying Motion Deblurring and Frame Interpolation with Events},
  author={Zhang, Xiang and Yu, Lei},
  year={2022},
  booktitle={CVPR},
}
