
Rethinking Alignment in Video Super-Resolution Transformers (NIPS 2022)

Shuwei Shi*, Jinjin Gu*, Liangbin Xie, Xintao Wang, Yujiu Yang and Chao Dong
arxiv | pretrained models | visual results


This repository is the official PyTorch implementation of "Rethinking Alignment in Video Super-Resolution Transformers" (arxiv, pretrained models). PSRT-recurrent achieves state-of-the-art performance in

  • Video SR (REDS, Vimeo90K, Vid4)

The alignment of adjacent frames is considered an essential operation in video super-resolution (VSR). Advanced VSR models, including the latest VSR Transformers, are generally equipped with well-designed alignment modules. However, advances in the self-attention mechanism may challenge this common belief. In this paper, we rethink the role of alignment in VSR Transformers and make several counter-intuitive observations. Our experiments show that: (i) VSR Transformers can directly utilize multi-frame information from unaligned videos, and (ii) existing alignment methods are sometimes harmful to VSR Transformers. These observations indicate that we can further improve the performance of VSR Transformers simply by removing the alignment module and adopting a larger attention window. Nevertheless, such designs dramatically increase the computational burden and cannot handle large motions. Therefore, we propose a new and efficient alignment method called patch alignment, which aligns image patches instead of pixels. VSR Transformers equipped with patch alignment achieve state-of-the-art performance on multiple benchmarks. Our work provides valuable insights into how multi-frame information is used in VSR and how to select alignment methods for different networks/datasets.

Patch Alignment
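Below is a minimal, hedged sketch of the patch alignment idea, for illustration only; it is not the repository's implementation. The function name patch_align, the flow channel convention (channel 0 = x offset, channel 1 = y offset), the default patch size of 8, and the nearest-neighbor sampling are all assumptions. The point it illustrates: instead of warping every pixel by optical flow, each patch of the supporting frame is shifted as a whole by a single rounded patch-level motion vector.

# Illustrative sketch only: move whole patches instead of warping individual pixels.
import torch
import torch.nn.functional as F

def patch_align(frame, flow, patch_size=8):
    # frame: (B, C, H, W) supporting frame; flow: (B, 2, H, W) optical flow from the
    # reference frame to this frame. H and W are assumed divisible by patch_size.
    b, c, h, w = frame.shape
    # one motion vector per patch: average the flow over each patch, then round it
    patch_flow = torch.round(F.avg_pool2d(flow, patch_size))           # (B, 2, H/p, W/p)
    # broadcast each per-patch vector back to pixel resolution
    patch_flow = F.interpolate(patch_flow, scale_factor=patch_size, mode="nearest")
    # base sampling grid in pixel coordinates
    xs = torch.arange(w, device=frame.device, dtype=frame.dtype).view(1, 1, w).expand(b, h, w)
    ys = torch.arange(h, device=frame.device, dtype=frame.dtype).view(1, h, 1).expand(b, h, w)
    coords_x = xs + patch_flow[:, 0]
    coords_y = ys + patch_flow[:, 1]
    # normalize to [-1, 1] and sample; every pixel in a patch moves by the same offset
    grid = torch.stack((2.0 * coords_x / (w - 1) - 1.0,
                        2.0 * coords_y / (h - 1) - 1.0), dim=-1)       # (B, H, W, 2)
    return F.grid_sample(frame, grid, mode="nearest",
                         padding_mode="border", align_corners=True)

In the paper, patch alignment is combined with the window attention of the VSR Transformer; the sketch above only illustrates the patch-level warping step itself.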

Update

[2022/10/31] Refined the code and released visual results.

PSRT-recurrent

Requirements

  • Python 3.8, PyTorch >= 1.9.1
  • Requirements: see requirements.txt
  • Platforms: Ubuntu 18.04, cuda-11.1

Quick Testing

Download the pretrained models and put them in the appropriate folders. Prepare the datasets and change the file paths in the inference code.

# download code
git clone https://github.com/XPixelGroup/RethinkVSRAlignment
cd RethinkVSRAlignment
pip install -r requirements.txt
pip install basicsr
python setup.py develop

# video sr trained on REDS, tested on REDS4
python inference_psrtrecurrent_reds.py

# video sr trained on Vimeo, tested on Vimeo
python inference_psrtrecurrent_vimeo90k.py --vimeo data/meta_info_Vimeo90K_train_GT.txt --device 0

Training

Prepare the corresponding datasets following the Quick Testing stage. For better I/O speed, you can follow the data preparation instructions to convert .png datasets to .lmdb datasets.
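As a hedged example of this conversion (assuming BasicSR's data preparation script; the exact flags may differ between BasicSR versions, so check the linked instructions):

# convert the REDS training frames to LMDB (example; verify the flags for your BasicSR version)
python scripts/data_preparation/create_lmdb.py --dataset reds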

# download code
git clone https://github.com/XPixelGroup/RethinkVSRAlignment
cd RethinkVSRAlignment
pip install -r requirements.txt
pip install basicsr
python setup.py develop

# video sr trained on REDS, tested on REDS4
bash dist_train.sh 8 options/4126_PSRTRecurrent_mix_precision_REDS_600K_N16.yml

# video sr trained on Vimeo, validated on Vid4
bash dist_train.sh 8 options/5123_PSRTRecurrent_mix_precision_Vimeo_300K_N14.yml

Results

Citation

@article{shi2022rethinking,
  title={Rethinking Alignment in Video Super-Resolution Transformers},
  author={Shi, Shuwei and Gu, Jinjin and Xie, Liangbin and Wang, Xintao and Yang, Yujiu and Dong, Chao},
  journal={arXiv preprint arXiv:2207.08494},
  year={2022}
}

Acknowledgment

Our code is built on BasicSR and partially borrows from mmediting.
