Getting low PSNR value when evaluating wdsr-a-8-x2 on Set5 dataset #19
Hi, have you solved this problem? I also got a similar issue on Set5 with the EDSR model. Thanks!
DIV2K bicubic downscaled images have been created with the MATLAB imresize function. I just tested this with Set5 at scale 4; when using the function above I'm getting a PSNR above 30. (Downscaled, super-resolved and original example images were attached.)
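For reference, a minimal sketch (my own code, not from this repository) of bicubic downscaling in Python. Note that Pillow's bicubic filter only approximates MATLAB's imresize and is not bit-exact; the file paths are illustrative:

```python
from PIL import Image

def bicubic_downscale(hr_path, scale):
    """Downscale an HR image by `scale` using Pillow's bicubic filter."""
    hr = Image.open(hr_path)
    lr_size = (hr.width // scale, hr.height // scale)
    return hr.resize(lr_size, resample=Image.BICUBIC)

# Example (path is illustrative):
# bicubic_downscale('Set5/baby.png', 4).save('baby_lr_x4.png')
```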
@krasserm Thanks for getting back to me. I have used your method to calculate PSNR on Set5 (using the pre-trained model you provided). The average PSNR I could get is 30.04; however, the paper claims 32.46. The PSNR I got for the baby image is 32.235966. Were you able to get an average of 32.46 on Set5? Thanks very much for your contribution.
The provided EDSR model in this repository is their single-scale baseline model with 16 residual blocks. The reported PSNR of 32.46 is for their EDSR model with 32 residual blocks. Running an evaluation of the baseline model on the DIV2K validation set (using this function) gives a PSNR of 28.89, which is only marginally lower than what they report for their baseline model (28.94, see Table 2 in the EDSR paper).

However, I can confirm that I also get a PSNR of 30.04 on Set5 using the baseline model. Given the small differences in PSNR between the 16-block and 32-block EDSR models, I'd also expect a higher PSNR value on Set5. On the other hand, given that the DIV2K evaluation results are almost identical, I rather suspect that there is still an issue related to Set5 image down-scaling. For example, I didn't actually verify whether the down-scaling Python code really gives the same result as the MATLAB code (will do that later). Furthermore, in the paper they also remove 6+scale pixels from the border of HR and SR images before calculating PSNR. In this repository, I do not remove this border, which might explain the small difference of 0.05 PSNR on the DIV2K validation set (between the provided and their baseline model).

It would probably be better to allow a user to specify a down-scaling function for training on DIV2K and actually down-scale DIV2K HR images during training using this function, instead of using DIV2K LR images directly. This down-scaling function could then be re-used for evaluation on other datasets as well.
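To illustrate the border-cropping detail, here is a minimal sketch (my own code, not the repository's) of a PSNR computation that ignores 6+scale pixels at each border, assuming uint8 NumPy arrays of identical shape:

```python
import numpy as np

def psnr_with_border(hr, sr, scale, extra=6):
    """PSNR between uint8 HR/SR arrays, ignoring scale+extra border pixels."""
    b = scale + extra
    hr = hr[b:-b, b:-b].astype(np.float64)
    sr = sr[b:-b, b:-b].astype(np.float64)
    mse = np.mean((hr - sr) ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))
```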
@krasserm Thanks for your detailed explanation! That is super helpful. I am also wondering how you handle images in Set14 with an odd number of pixels in width or height. For example, an HR image could be 513 by 513. How do you deal with this case? Do you downsample the image to 512 by 512 first, then downscale it to 128 by 128 and feed it into the network? Do you know if there is a standard way to handle this? I found that Urban100 and Set14 have tons of images like that. Thanks very much again!
@XIN71 you're welcome, glad it was helpful. Regarding images with an odd number of pixels in a dimension, I'd rather crop the largest possible even-by-even image instead of downsampling, as downsampling could potentially change the values of all pixels in the image. So in the worst case you exclude a single row and column of pixels.
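A minimal sketch of that cropping strategy, assuming (H, W, C) NumPy arrays; `multiple=2` gives the even-by-even case, and passing the scale factor instead keeps the LR dimensions integral:

```python
import numpy as np

def crop_to_multiple(img, multiple=2):
    """Center-crop so height and width are divisible by `multiple`.

    At worst this discards multiple-1 rows and columns; unlike
    resampling, all remaining pixel values stay untouched.
    """
    h, w = img.shape[:2]
    new_h, new_w = h - h % multiple, w - w % multiple
    top, left = (h - new_h) // 2, (w - new_w) // 2
    return img[top:top + new_h, left:left + new_w]
```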
@krasserm That makes perfect sense. Thanks very much!
I also used the pretrained EDSR x4 model on Set5. To get a PSNR similar to the original paper, I tested several things. First, the difference between the MATLAB imresize function and the Python version is negligible. Second, in the original paper, only PSNR on the Y channel is measured, and the same number of pixels as the scale factor is ignored at the border. So, for x4, 4 pixels from each border are cropped before evaluation. Note that in the original paper, MATLAB was used to convert RGB to YCbCr; using OpenCV to convert the color space will lead to different results. With this setup I got 32.0222 dB on Set5, which is much closer to the 32.46 dB from the paper.
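A sketch of that protocol (my own code, under the stated assumptions): the luma coefficients below follow MATLAB's rgb2ycbcr (BT.601 for uint8 data), which differs from OpenCV's YCrCb conversion; inputs are uint8 RGB NumPy arrays:

```python
import numpy as np

def rgb_to_y(img):
    """Y channel as in MATLAB rgb2ycbcr (uint8 RGB in, float64 out)."""
    r, g, b = [img[..., i].astype(np.float64) for i in range(3)]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0

def psnr_y(hr, sr, scale):
    """Y-channel PSNR with `scale` pixels cropped from each border."""
    hr_y = rgb_to_y(hr)[scale:-scale, scale:-scale]
    sr_y = rgb_to_y(sr)[scale:-scale, scale:-scale]
    mse = np.mean((hr_y - sr_y) ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))
```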
@leolya wow, thanks for this useful feedback, much appreciated!
Hi sir, thanks for the wonderful code. I need your help.
@krasserm TensorFlow does not have the function you mentioned, so I am not sure how it is working. Am I missing something? Can you please provide an example?
It's a typo, it should be
Can I ask how much time it took to finish training an EDSR model and get a result similar to the paper? And what is your GPU or environment specification? Thank you.
The evaluation results for the DIV2K validation dataset are similar to the mentioned ones, but when I tried to evaluate on Set5, Set14 and other benchmark datasets, the results I got were very low. These results should be more than 36 dB PSNR for the x2 scale factor, but I got about 31 dB PSNR on the Set5 dataset.
In your evaluation code, I just replaced the DIV2K validation set images with Set5 images.
Can you help me solve this problem?
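For anyone debugging this, a hedged sketch of a benchmark evaluation loop that ties the pieces above together; `resolve_image` is a placeholder for whatever model inference you use (LR uint8 array in, SR uint8 array out), and the directory layout is illustrative:

```python
import glob
import numpy as np
from PIL import Image

def evaluate_benchmark(hr_dir, scale, resolve_image):
    """Mean Y-channel PSNR over a directory of HR benchmark images.

    Reuses crop_to_multiple and psnr_y from the sketches above;
    resolve_image is a hypothetical model-inference callable.
    """
    psnrs = []
    for path in sorted(glob.glob(f'{hr_dir}/*.png')):
        hr = np.asarray(Image.open(path).convert('RGB'))
        hr = crop_to_multiple(hr, multiple=scale)  # handle odd-sized images
        lr_img = Image.fromarray(hr).resize(
            (hr.shape[1] // scale, hr.shape[0] // scale), Image.BICUBIC)
        sr = resolve_image(np.asarray(lr_img))
        psnrs.append(psnr_y(hr, sr, scale))
    return float(np.mean(psnrs))

# Example (illustrative): evaluate_benchmark('Set5', 2, my_model_fn)
```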