Zhengqiang Zhang1,2 | Ruihuang Li1,2 | Shi Guo1,2 | Yang Cao3 | Lei Zhang1,2
1The Hong Kong Polytechnic University, 2The PolyU-OPPO Joint Innovation Lab, 3The Hong Kong University of Science and Technology
Online video super-resolution (online-VSR) relies heavily on an effective alignment module to aggregate temporal information, while the strict latency requirement makes accurate and efficient alignment very challenging. Though much progress has been achieved, most existing online-VSR methods estimate the motion field of each frame separately to perform alignment, which is computationally redundant and ignores the fact that the motion fields of adjacent frames are correlated. In this work, we propose an efficient Temporal Motion Propagation (TMP) method, which leverages the continuity of the motion field to achieve fast pixel-level alignment among consecutive frames. Specifically, we first propagate the offsets from previous frames to the current frame, and then refine them in a local neighborhood, which significantly reduces the matching space and speeds up the offset estimation process. Furthermore, to enhance the robustness of alignment, we perform spatial-wise weighting on the warped features, where positions with more precise offsets are assigned higher importance. Experiments on benchmark datasets demonstrate that the proposed TMP method achieves leading online-VSR accuracy as well as inference speed.
Figure: Overview of the proposed online-VSR method. Left: the flowchart of the proposed method. The OBJ path aims to locate moving objects in the current frame, while the CAM path matches the static regions. There are two major differences between our method and existing methods; one is the temporal motion propagation (TMP) module.
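As a rough illustration of the propagate-then-refine idea from the abstract (not the paper's actual implementation, which runs as a CUDA kernel inside the network; all names below are hypothetical), the sketch propagates the previous frame's offsets to the current frame and refines each one only within a small neighborhood, and derives a per-pixel confidence for spatial-wise weighting of the warped features:

```python
import numpy as np

def propagate_and_refine(prev_offsets, prev_feat, curr_feat, radius=1):
    """Toy sketch of temporal motion propagation (TMP).

    prev_offsets: (H, W, 2) integer offsets propagated from the previous frame.
    prev_feat, curr_feat: (H, W, C) feature maps of the two frames.
    Each propagated offset is refined only within a (2*radius+1)^2 window,
    instead of searching the full motion space. Also returns a confidence
    map, mimicking the spatial-wise weighting of warped features.
    """
    H, W, _ = curr_feat.shape
    refined = np.zeros_like(prev_offsets)
    conf = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            dy0, dx0 = prev_offsets[y, x]
            best_cost, best = np.inf, (dy0, dx0)
            # refine the propagated offset in a small local window
            for ddy in range(-radius, radius + 1):
                for ddx in range(-radius, radius + 1):
                    sy = int(np.clip(y + dy0 + ddy, 0, H - 1))
                    sx = int(np.clip(x + dx0 + ddx, 0, W - 1))
                    cost = np.sum((curr_feat[y, x] - prev_feat[sy, sx]) ** 2)
                    if cost < best_cost:
                        best_cost, best = cost, (sy - y, sx - x)
            refined[y, x] = best
            # lower matching cost -> higher weight for the warped feature
            conf[y, x] = np.exp(-best_cost)
    return refined, conf
```

In this toy setting, if the previous frame is simply the current frame shifted by one pixel, the refinement recovers that shift from zero initial offsets, because the true match already lies within the small search radius.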
- We train and test this project under `torch==1.10` and Python 3.7. You can install the required libraries with `pip3 install -r requirement.txt`.
- Please refer to here to download the REDS and Vimeo90K datasets, and there for the Vid4 dataset.
- You can train this project using `python3 basicsr/train.py -opt options/train/TMP/train_TMP.yaml`.
- You can test the trained models using `python3 basicsr/test.py -opt options/test/TMP/test_TMP.yaml`.
Please download the pretrained models from OneDrive.
Please modify the paths of the dataset and the trained model in the corresponding config file manually.
Please refer to the paper for more results.
@misc{zhang2023tmp,
title={TMP: Temporal Motion Propagation for Online Video Super-Resolution},
author={Zhengqiang Zhang and Ruihuang Li and Shi Guo and Yang Cao and Lei Zhang},
year={2023},
eprint={2312.09909},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Please open an issue or contact Zhengqiang at zhengqiang.zhang@connect.polyu.hk.
Great thanks to BasicSR. We build our project based on their code. Specifically, we implement the CUDA version of TMP and the corresponding network architectures. Please refer to basicsr/archs/tmp* for more details.
This project is released under the Apache 2.0 license.
Please refer to BasicSR's LICENCE.md for more details about the license of the code in BasicSR.