Exploiting Raw Images for Real-Scene Super-Resolution

This repository is for the rawSR algorithm introduced in the TPAMI paper Exploiting raw images for real-scene super-resolution.

Conference version: Towards real scene super-resolution with raw images, CVPR 2019

Paper, Project

Contents

  1. Environment
  2. Introduction
  3. Train
  4. Test
  5. Results
  6. Reference

Environment

Our model is trained and tested in the following environment on Ubuntu:
  • Python: v2.7.5 with the following packages:

    • tensorflow-gpu: v1.9.0

    • rawpy: v0.12.0

    • numpy: v1.15.3

    • scipy: v1.1.0

Introduction

Super-resolution is a fundamental problem in computer vision which aims to overcome the spatial limitation of camera sensors. While significant progress has been made for single image super-resolution, most existing algorithms only perform well on unrealistic synthetic data, which limits their applications in real scenarios. In this paper, we study the problem of real-scene single image super-resolution to bridge the gap between synthetic data and real captured images. Specifically, we focus on two problems of existing super-resolution algorithms: first, the lack of realistic training data; second, the insufficient utilization of the information recorded by cameras. To address the first issue, we propose a new pipeline to generate more realistic training data by simulating the imaging process of digital cameras. For the second problem, we develop a two-branch convolutional neural network to exploit the originally-recorded radiance information in raw images. In addition, we propose a dense channel-attention block for better image restoration as well as a learning-based guided filter network for more effective color correction. Our model is able to generalize to different cameras without deliberately training on images from specific camera types. Extensive experiments demonstrate that the proposed algorithm can help recover fine details and clear structures, and more importantly, achieve high-quality results for single image super-resolution in real scenarios.
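The training-data pipeline above synthesizes realistic low-resolution raw inputs by simulating the camera imaging process. A minimal numpy sketch of that idea (box downsampling, RGGB mosaicing, and a signal-dependent noise model) — the function names, noise parameters, and the omission of blur kernels are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def mosaic_rggb(linear_rgb):
    """Sample an RGGB Bayer mosaic from a linear RGB image (H and W even)."""
    h, w, _ = linear_rgb.shape
    raw = np.zeros((h, w), dtype=linear_rgb.dtype)
    raw[0::2, 0::2] = linear_rgb[0::2, 0::2, 0]  # R
    raw[0::2, 1::2] = linear_rgb[0::2, 1::2, 1]  # G
    raw[1::2, 0::2] = linear_rgb[1::2, 0::2, 1]  # G
    raw[1::2, 1::2] = linear_rgb[1::2, 1::2, 2]  # B
    return raw

def synthesize_lr_raw(hr_linear_rgb, scale=4, read_noise=1e-3, shot_noise=1e-2, rng=None):
    """Toy degradation: box-downsample, mosaic, then add shot/read noise."""
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = hr_linear_rgb.shape
    # Average over scale x scale blocks (a stand-in for the paper's blur + downsample).
    lr = hr_linear_rgb.reshape(h // scale, scale, w // scale, scale, c).mean(axis=(1, 3))
    raw = mosaic_rggb(lr)
    # Signal-dependent Poisson-Gaussian noise approximation.
    var = shot_noise * raw + read_noise
    noisy = raw + rng.normal(scale=np.sqrt(var))
    return np.clip(noisy, 0.0, 1.0)
```

A 64×64 HR image with scale 4 yields a 16×16 noisy Bayer mosaic, mimicking what a sensor would record for the low-resolution scene.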

Fig. 5. The image restoration branch adopts the proposed DCA blocks in an encoder-decoder framework and reconstructs high-resolution linear color measurements from the degraded low-resolution raw input $X_{raw}$.

Fig. 7. Network architecture of the color correction branch. Our model predicts the pixel-wise transformations A and B with a reference color image.
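Once the color correction branch has predicted the pixel-wise maps A and B, applying them is a per-pixel affine transform, y = A ⊙ x + B. A sketch of that final step, assuming per-channel coefficients predicted at low resolution and upsampled in the spirit of the learning-based guided filter (shapes and nearest-neighbor upsampling are illustrative assumptions):

```python
import numpy as np

def upsample_nearest(coeff, scale):
    """Upsample coarse coefficient maps to full resolution (nearest neighbor)."""
    return np.repeat(np.repeat(coeff, scale, axis=0), scale, axis=1)

def apply_affine_correction(x, a_coarse, b_coarse, scale):
    """Color-correct a restored image x (H, W, 3) with predicted maps:
    y = A * x + B, where A and B are (H/scale, W/scale, 3) coefficient maps."""
    a = upsample_nearest(a_coarse, scale)
    b = upsample_nearest(b_coarse, scale)
    return a * x + b
```

With A = 1 and B = 0 the transform is the identity; in the network, A and B are regressed from the reference color image so the corrected output matches its colors.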

Train

  • Prepare training data

    1. Download the raw super-resolution dataset (13040 training and 150 validation images) from [Google Drive] [x2] [x4]
    2. Place the downloaded dataset in './Dataset'
  • Begin to train

    1. (optional) Download the pretrained weights and place them in './log_dir'
    2. Run the following script to train our model:
      python train_and_test.py
      
    3. For different purposes, you can edit './parameters.py' to change parameters according to the annotations. By default, the model is trained from epoch 0 without pretrained weights, and the validation images are used to evaluate model performance every 10 epochs.
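The switches named in this README (TRAINING, TESTING, REAL, and the various *_PATH variables) live in 'parameters.py'. A hypothetical fragment showing the kind of flags involved — the default values and any names beyond those mentioned above are assumptions, not the repository's actual file:

```python
# Hypothetical excerpt of parameters.py; defaults shown here are illustrative.
TRAINING = True                          # set False when only testing
TESTING = False                          # evaluate on the synthetic test set
REAL = False                             # run inference on real captured raw images
TESTING_DATA_PATH = './Dataset/TESTING'  # synthetic test images
REAL_DATA_PATH = './Dataset/REAL'        # real raw captures
RESULT_PATH = './results'                # where outputs are written
```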

Test

  • Prepare testing data

    • Synthetic data

      1. Download the synthetic testing dataset (150 images) from [Google Drive] [x2] [x4] or [BaiduNetdisk] [x2]
      2. Place the downloaded dataset in './Dataset/' with the folder name 'TESTING', or modify 'TESTING_DATA_PATH' in 'parameters.py' to the corresponding path.
    • Real data

      1. If you wish to test real data, you can prepare the raw images yourself, or download some examples from [Google Drive].
      2. Place the downloaded dataset or your prepared raw images (e.g., .CR, .RAW, .NEF) in './Dataset/' with the folder name 'REAL', or modify 'REAL_DATA_PATH' in 'parameters.py' to the corresponding path.
  • Begin to test

    • Synthetic data

      1. Set 'TRAINING' and 'TESTING' of 'parameters.py' to be False and True respectively.
      2. Download the pretrained models from [Google Drive] [x2], [x4], and place them in './log_dir'.
    • Real image

      1. Set 'REAL' of 'parameters.py' to be True.
      2. Download the pretrained models from [Google Drive] [x2], [x4], and place them in './log_dir'.

      Then run the following script for testing: python train_and_test.py

      Note: the testing results are saved to the path defined by 'RESULT_PATH' in 'parameters.py'.
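The network consumes raw sensor data, and a common preprocessing step for Bayer inputs (used, e.g., in SID [1]) is to pack the single-channel mosaic into four half-resolution channels. A sketch assuming an RGGB pattern — the black/white levels and helper name are illustrative; with rawpy installed, `rawpy.imread(path).raw_image_visible` supplies the mosaic:

```python
import numpy as np

def pack_raw_rggb(bayer, black_level=0, white_level=1023):
    """Pack a single-channel RGGB Bayer mosaic (H, W) into a normalized
    (H/2, W/2, 4) array with channels [R, G1, G2, B]."""
    norm = (bayer.astype(np.float32) - black_level) / float(white_level - black_level)
    norm = np.clip(norm, 0.0, 1.0)
    return np.stack([norm[0::2, 0::2],   # R
                     norm[0::2, 1::2],   # G1
                     norm[1::2, 0::2],   # G2
                     norm[1::2, 1::2]],  # B
                    axis=-1)

# With a real file (requires rawpy):
#   import rawpy
#   raw = rawpy.imread('example.NEF')
#   packed = pack_raw_rggb(raw.raw_image_visible)
```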

Results

  • Quantitative comparisons

    TABLE 1 Quantitative evaluations on the synthetic dataset. “Blind” represents the images with variable blur kernels, and “Non-blind” denotes a fixed kernel.

  • Visual comparisons

    • Synthetic data: Fig. 8. Results from the proposed synthetic dataset. References for the baseline methods, including SID [1], SRDenseNet [5] and RDN [6], can be found in Table 1. “GT” represents ground truth.

    • Real data: Fig. 11. Comparison with the state of the art on real-captured images. Since the outputs are of ultra-high resolution, spanning from 6048 × 8064 to 12416 × 17472, we only show image patches cropped from the tiny green boxes in (a). The input images from top to bottom are captured by Sony, Canon, iPhone 6s Plus, and Nikon cameras, respectively.

All mentioned results of our algorithm can be found at [Google Drive] [Blind], [Non-blind], [Real]
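Quantitative comparisons such as Table 1 are conventionally reported in PSNR (and often SSIM) for super-resolution; the exact metrics and protocol should be taken from the paper. For reference, a minimal PSNR helper:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images with values in [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```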

Reference

[1] C. Chen, Q. Chen, J. Xu, and V. Koltun. Learning to see in the dark. In CVPR, 2018.

[2] C. Dong, C. C. Loy, K. He, and X. Tang. Learning a deep convolutional network for image super-resolution. In ECCV, 2014.

[3] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In CVPR, 2016.

[4] E. Schwartz, R. Giryes, and A. M. Bronstein. DeepISP: Towards learning an end-to-end image processing pipeline. TIP, 2018.

[5] T. Tong, G. Li, X. Liu, and Q. Gao. Image super-resolution using dense skip connections. In ICCV, 2017.

[6] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu. Residual dense network for image super-resolution. In CVPR, 2018.

   

Please consider citing this paper if you find the code and data useful in your research:

@inproceedings{xu2019towards,
  title={Towards real scene super-resolution with raw images},
  author={Xu, Xiangyu and Ma, Yongrui and Sun, Wenxiu},
  booktitle={CVPR},
  year={2019}
}

@article{xu2021exploiting,
  title={Exploiting Raw Images for Real-Scene Super-Resolution},
  author={Xu, Xiangyu and Ma, Yongrui and Sun, Wenxiu and Yang, Ming-Hsuan},
  journal={TPAMI},
  year={2021}
}
