This repository is the official PyTorch implementation of our proposed MSWSR. The code is developed by supercaoO (Huanrong Zhang) based on WSR. Future updates will be released in supercaoO/MSWSR first.
We propose a lightweight and fast network (MSWSR) that performs multi-scale SR simultaneously by learning multi-level wavelet coefficients of the target image. The proposed network consists of a CNN part and an RNN part. The CNN part predicts the highest-level low-frequency wavelet coefficients, while the RNN part predicts the remaining frequency bands of wavelet coefficients. Moreover, the RNN part is extendable to more scales. To further reduce the model size, a non-square (side-window) convolution kernel is proposed to cut the number of network parameters.
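As a back-of-the-envelope illustration of why a non-square kernel saves parameters, compare the parameter count of a square 3x3 convolution with that of a 1x3 one. This is only a sketch: the exact side-window kernel shapes and channel widths used in MSWSR are given in the paper, and the 64-channel configuration below is an assumption for illustration.

```python
def conv2d_params(c_in, c_out, kh, kw, bias=True):
    # Parameter count of a standard 2D convolution layer:
    # one (kh x kw) filter per (input channel, output channel) pair, plus biases.
    return c_in * c_out * kh * kw + (c_out if bias else 0)

# Hypothetical 64-channel layer: square 3x3 vs. non-square 1x3 kernel.
square = conv2d_params(64, 64, 3, 3)   # 36,928 parameters
side   = conv2d_params(64, 64, 1, 3)   # 12,352 parameters
print(square, side, f"{side / square:.0%}")  # the 1x3 kernel needs about a third
```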
The framework of the proposed MSWSR for (2, 4, 8)x SR tasks. All of the recurrent blocks (RBs) share the same weights.
The details about the S-IMDB can be found in the early access version of our main paper.
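To make the wavelet representation concrete, the sketch below runs a single-level 2D Haar transform on one 2x2 block and inverts it. This is purely illustrative: MSWSR uses `pytorch_wavelets` for the transforms and *learns* the coefficients of the high-resolution target rather than computing them from it.

```python
def haar2d_block(a, b, c, d):
    # One 2x2 pixel block -> (LL, HL, LH, HH) orthonormal Haar coefficients.
    ll = (a + b + c + d) / 2   # low-frequency (average) band
    hl = (a - b + c - d) / 2   # horizontal detail
    lh = (a + b - c - d) / 2   # vertical detail
    hh = (a - b - c + d) / 2   # diagonal detail
    return ll, hl, lh, hh

def inv_haar2d_block(ll, hl, lh, hh):
    # Exact inverse: recover the 2x2 block from its four coefficients.
    a = (ll + hl + lh + hh) / 2
    b = (ll - hl + lh - hh) / 2
    c = (ll + hl - lh - hh) / 2
    d = (ll - hl - lh + hh) / 2
    return a, b, c, d

coeffs = haar2d_block(1, 2, 3, 4)
print(coeffs)                       # (5.0, -1.0, -2.0, 0.0)
print(inv_haar2d_block(*coeffs))    # (1.0, 2.0, 3.0, 4.0) -- lossless round trip
```

Applying the transform recursively to the LL band yields the multi-level coefficients that the CNN part (highest-level LL) and the RNN part (remaining bands) are trained to predict.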
If you find our work useful in your research or publications, please consider citing:

```
@article{zhang2021mswsr,
  author  = {Zhang, Huanrong and Xiao, Jie and Jin, Zhi},
  title   = {Multi-scale Image Super-Resolution via A Single Extendable Deep Network},
  journal = {IEEE Journal of Selected Topics in Signal Processing},
  year    = {2020},
  volume  = {},
  number  = {},
  pages   = {1-1},
  doi     = {10.1109/JSTSP.2020.3045282}
}
```
- cuda & cudnn
- Python 3
- PyTorch >= 1.0.0
- pytorch_wavelets
- tqdm
- cv2
- pandas
- skimage
- scipy == 1.0.0
- Matlab
- Clone this repository and `cd` to `MSWSR`:

  ```
  git clone https://github.com/supercaoO/MSWSR.git
  cd MSWSR
  ```

- Check if the pre-trained model `MSWSR_x248.pth` exists in `./models`.
- Then, run the following command for evaluation on Set5:

  ```
  CUDA_VISIBLE_DEVICES=0 python test.py -opt options/test/test_MSWSR_Set5_x248.json
  ```
- Finally, PSNR/SSIM values for Set5 are shown on your terminal, and you can find the reconstructed images in `./results/SR/BI`.
- If you have cloned this repository, you can first download the SR benchmarks (Set5, Set14, B100, Urban100, and Manga109) from GoogleDrive (provided by SRFBN_CVPR19) or BaiduYun (code: p9pf).
- Run `./results/Prepare_TestData_HR_LR.m` in Matlab to generate HR/LR images with the BI degradation model.
- Edit `./options/test/test_WSR_x248.json` for your needs according to `./options/test/README.md`.
- Then, run the commands:

  ```
  cd WSR
  CUDA_VISIBLE_DEVICES=0 python test.py -opt options/test/test_WSR_x248.json
  ```

- Finally, PSNR/SSIM values are shown on your terminal, and you can find the reconstructed images in `./results/SR/BI`. You can further evaluate SR results using `./results/Evaluate_PSNR_SSIM.m`.
- If you have cloned this repository, you can first place your own images in `./results/LR/MyImage`.
- Edit `./options/test/test_MSWSR_own.json` for your needs according to `./options/test/README.md`.
- Then, run the commands:

  ```
  cd MSWSR
  CUDA_VISIBLE_DEVICES=0 python test.py -opt options/test/test_MSWSR_own.json
  ```

- Finally, you can find the reconstructed images in `./results/SR/MyImage`.
- Download the training set DIV2K from the official link or BaiduYun (code: m84q).
- Run `./scripts/Prepare_TrainData_HR_LR.m` in Matlab to generate HR/LR training pairs with the BI degradation model and the corresponding scale factors.
- Run `./results/Prepare_TestData_HR_LR.m` in Matlab to generate HR/LR test images with the BI degradation model and the corresponding scale factors, and choose one of the SR benchmarks for evaluation during training.
- Edit `./options/train/train_WSR_x248.json` for your needs according to `./options/train/README.md`.
- Then, run the commands:

  ```
  cd MSWSR
  CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/train_WSR_x248.json
  ```

- You can monitor the training process in `./experiments`.
- Finally, you can follow the test instructions above to evaluate your model.
The inference time is measured on the B100 dataset (100 images) using an Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz and an NVIDIA TITAN RTX GPU. More detailed comparison settings can be found in our main paper.
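A generic way to measure average inference time is sketched below. This is an assumption-laden harness, not the exact protocol from the paper; for GPU models one would additionally synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, and the workload here is a stand-in rather than a real SR network.

```python
import time

def average_inference_time(run_once, n_warmup=3, n_runs=10):
    """Average wall-clock seconds per call of `run_once`, after warm-up."""
    for _ in range(n_warmup):        # warm-up runs (caches, lazy initialization)
        run_once()
    start = time.perf_counter()
    for _ in range(n_runs):
        run_once()
    return (time.perf_counter() - start) / n_runs

# Example with a dummy workload standing in for model(input):
avg = average_inference_time(lambda: sum(range(10_000)))
print(f"{avg * 1e6:.1f} us/run")
```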
Comparisons on the number of network parameters, FLOPs, inference time, and PSNR/SSIM/LPIPS of different single-scale SR methods. Down arrow: Lower is better. Up arrow: Higher is better. Best and second best results are marked in red and blue, respectively.
Comparisons on the number of network parameters, FLOPs, and inference time of different single-scale SR methods with higher PSNR/SSIM/LPIPS. Down arrow: Lower is better. Up arrow: Higher is better. Best and second best results are marked in red and blue, respectively.
Comparisons on the number of network parameters, FLOPs, inference time, and PSNR/SSIM/LPIPS of different multi-scale SR methods. Down arrow: Lower is better. Up arrow: Higher is better. Best and second best results are marked in red and blue, respectively.
Visual comparisons of (2, 4, 8)x SR with different SR advances, including single-scale SR networks (i.e., DRCN and EDSR) and multi-scale SR networks (i.e., LapSRN and Meta-RDN).