Getting Low value of PSNR when evaluating wdsr-a-8-x2 on Set5 dataset #19

Vishal2188 opened this issue Apr 30, 2019 · 13 comments

The evaluation results for the DIV2K validation dataset are similar to the ones mentioned. But when I tried to evaluate on Set5, Set14 and other benchmark datasets, the results I got are very low. The PSNR should be more than 36 dB for the x2 scale factor, but I got only about 31 dB on the Set5 dataset.

In your evaluation code, I just replaced the DIV2K validation set images with Set5 images.

Can you help me to solve this problem?

xliucs commented Feb 13, 2020

Hi, have you solved this problem? I also ran into a similar issue on Set5 with the EDSR model. Thanks!

krasserm commented Feb 14, 2020

The DIV2K bicubic downscaled images have been created with the MATLAB imresize function. It is important that you create the downscaled versions of the Set5 images in the very same way. A Python version of MATLAB imresize is available here. For example, to bicubic-downscale by a factor of 4, use imresize(x, 0.25, method='bicubic') and then feed the downscaled image into a pre-trained model.

I just tested this with Set5 at scale 4. When using the function above I'm getting a PSNR above 30; when using tf.image.imresize(x, ..., ResizeMethod.BICUBIC) I'm getting a PSNR of about 21 (!!). I'll add this to the documentation later and maybe also provide some example code (incl. the imresize mentioned above). Attached is an example of a downscaled, EDSR-super-resolved and original image from Set5:

Downscaled: baby_x4
Super-resolved: baby_sr
Original: baby
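
For reference, a minimal sketch of the down-scale / super-resolve step described above, assuming the MATLAB-compatible imresize port linked above is importable as `imresize` and using the `edsr` / `resolve_single` helpers from this repository (the weights path is just illustrative):

```python
import numpy as np
from PIL import Image

from imresize import imresize       # MATLAB-compatible bicubic resize (linked above)
from model import resolve_single    # helper from this repository
from model.edsr import edsr         # EDSR model builder from this repository

# Load an HR image from Set5 and bicubic-downscale it by 4 with the
# MATLAB-compatible imresize (NOT with tf.image.resize).
hr = np.asarray(Image.open('Set5/baby.png'))
lr = imresize(hr, 0.25, method='bicubic')

# Super-resolve with a pre-trained EDSR baseline model (illustrative weights path).
model = edsr(scale=4, num_res_blocks=16)
model.load_weights('weights/edsr-16-x4/weights.h5')
sr = resolve_single(model, lr)
```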

xliucs commented Feb 14, 2020

@krasserm Thanks for getting back to me. I have used your method to calculate PSNR on Set5 (using the pre-trained model you provided). The average PSNR I could get is 30.04. However, the paper claims 32.46. The PSNR I got for this baby image is 32.235966. Were you able to get an average of 32.46 on Set5? Thanks very much for your contribution.

@krasserm

The EDSR model provided in this repository is their single-scale baseline model with 16 residual blocks. The reported PSNR of 32.46 is for their EDSR model with 32 residual blocks.

Running an evaluation of the baseline model on the DIV2K validation set (using this function) gives a PSNR of 28.89, which is only marginally lower than what they report for their baseline model (28.94, see Table 2 in the EDSR paper).

However, I can confirm that I also get a PSNR of 30.04 on Set5 using the baseline model. Given the small differences in PSNR between the 16-block and 32-block EDSR models, I'd also expect a higher PSNR value on Set5.

On the other hand, given that the DIV2K evaluation results are almost identical, I rather suspect that there is still an issue related to Set5 image down-scaling. For example, I didn't actually verify whether the down-scaling Python code really gives the same result as the MATLAB code (will do that later).

Furthermore, in the paper they also remove 6 + scale pixels from the borders of HR and SR images before calculating PSNR. In this repository, I do not remove this border, which might explain the small difference of 0.05 PSNR on the DIV2K validation set (between the provided and their baseline model).

It would probably be better to allow a user to specify a down-scaling function for training on DIV2K, and to actually down-scale DIV2K HR images during training using this function instead of using the DIV2K LR images directly. This down-scaling function could then be re-used for evaluation on other datasets as well.
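
As a rough sketch of what such an evaluation with border cropping could look like (plain NumPy, not this repository's evaluate function):

```python
import numpy as np

def psnr(hr, sr, border=0, max_val=255.0):
    """PSNR between HR and SR images, optionally ignoring `border` pixels on
    each side (the EDSR paper removes 6 + scale pixels before computing PSNR)."""
    hr = hr.astype(np.float64)
    sr = sr.astype(np.float64)
    if border > 0:
        hr = hr[border:-border, border:-border]
        sr = sr[border:-border, border:-border]
    mse = np.mean((hr - sr) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# e.g. for scale 4, remove a (6 + 4)-pixel border as in the paper:
# value = psnr(hr, sr, border=6 + 4)
```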

xliucs commented Feb 17, 2020

@krasserm Thanks for your detailed explanation! That is super helpful. I am also wondering how you handle images in Set14 with an odd number of pixels in width or height. For example, an HR image could be 513 by 513. How do you deal with this case? Do you downsample the image to 512 by 512 first, then downscale it to 128 by 128 and feed it into the network? Do you know if there is a standard way to handle this? I found that Urban100 and Set14 have tons of images like that. Thanks very much again!

@krasserm

@XIN71 you're welcome, glad it was helpful. Regarding images with an odd number of pixels in a dimension, I'd rather crop the largest possible even-by-even image instead of downsampling, as downsampling could potentially change the values of all pixels in the image. So in the worst case you exclude height + width - 1 pixels of the HR image from evaluation, which hopefully can be neglected compared to the total number of height * width pixels. I'm not sure if there is a "standard way" to deal with this situation.
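
A small sketch of that cropping idea (a hypothetical helper that crops to the largest size divisible by the scale factor instead of resampling):

```python
def crop_to_scale(hr, scale):
    """Crop an HR image so that height and width are divisible by `scale`,
    instead of resampling (which would change every pixel value)."""
    h, w = hr.shape[:2]
    return hr[:h - h % scale, :w - w % scale]

# e.g. a 513x513 Set14 image at scale 2 becomes 512x512 before downscaling:
# hr = crop_to_scale(hr, 2)
```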

xliucs commented Feb 17, 2020

@krasserm That makes perfect sense. Thanks very much!

leolya commented Feb 20, 2020

I also used the pre-trained EDSR x4 model on Set5. To get a PSNR similar to the original paper, I tested several things. First, the difference between the MATLAB imresize function and the Python version can be neglected. Second, in the original paper only the PSNR on the Y channel is measured, and as many border pixels as the scale factor are ignored. So, for x4, 4 pixels from each border are cropped before evaluation. Note that in the original paper MATLAB was used to convert RGB to YCbCr; using OpenCV to convert the color space will lead to different results. As shown in the table, I got 32.0222 dB on Set5, which is very close to the 32.46 dB from the paper.

|                  | RGB channels (Python imresize) | RGB channels (Matlab imresize) | Y channel only (Matlab YCbCr) | Y channel only (OpenCV YCbCr) |
|------------------|--------------------------------|--------------------------------|-------------------------------|-------------------------------|
| Without cropping | 30.0404                        | 30.0401                        | 31.9144                       | 30.6047                       |
| Crop 4 pixels    | 30.1392                        | 30.1388                        | 32.0222                       | 30.7129                       |
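
For anyone who wants to reproduce the Y-channel numbers, here is a rough sketch of a MATLAB-style luma conversion and Y-channel PSNR with border cropping (my own helpers, not part of this repository):

```python
import numpy as np

def rgb2y_matlab(img):
    """Y (luma) channel as computed by MATLAB's rgb2ycbcr (ITU-R BT.601,
    studio-swing range) - this differs from OpenCV's full-range conversion."""
    img = img.astype(np.float64) / 255.0
    return 65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2] + 16.0

def psnr_y(hr, sr, scale):
    """Y-channel PSNR with `scale` border pixels cropped, as described above."""
    hr_y = rgb2y_matlab(hr)[scale:-scale, scale:-scale]
    sr_y = rgb2y_matlab(sr)[scale:-scale, scale:-scale]
    mse = np.mean((hr_y - sr_y) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```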

@krasserm

@leolya wow, thanks for this useful feedback, much appreciated!

@nanmehta

Hi, thanks for the wonderful code. I need your help.
I am confused between tf.image.resize and the imresize function. Is there any difference between them?

@nahidalam

@krasserm TensorFlow does not have tf.image.imresize.
Instead it has tf.image.resize.

So I am not sure, when you said

when using tf.image.imresize(x, ..., ResizeMethod.BICUBIC) I'm getting a PSNR of about 21

how that works. Am I missing something? Can you please provide an example?

@krasserm

It's a typo, it should be tf.image.resize. Thanks for clarifying!
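
For completeness, a small sketch of the two calls being compared (assuming TF 2.x and the MATLAB-compatible imresize port linked earlier; tf.image.resize takes a target size rather than a scale factor and uses a different bicubic implementation than MATLAB's imresize, which is why the PSNR differs):

```python
import numpy as np
import tensorflow as tf
from PIL import Image

from imresize import imresize   # MATLAB-compatible port linked above

hr = np.asarray(Image.open('Set5/baby.png'))
h, w = hr.shape[:2]

# MATLAB-style bicubic downscale by a factor of 4 (how the DIV2K LR images were made).
lr_matlab = imresize(hr, 0.25, method='bicubic')

# TensorFlow bicubic downscale: different kernel / rounding behaviour, which is
# what led to the ~21 dB PSNR mentioned earlier in this thread.
lr_tf = tf.image.resize(hr, [h // 4, w // 4], method=tf.image.ResizeMethod.BICUBIC)
lr_tf = tf.cast(tf.clip_by_value(tf.round(lr_tf), 0, 255), tf.uint8).numpy()
```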

Jiuruan commented Aug 3, 2024

Can I ask how much time it took to finish training an EDSR model and get results similar to the paper? And what are your GPU and environment specifications?

Thank you.
