I am trying to run the evaluation for your code without doing the training myself. I am using the pre-trained model ([pretrained.pth.tar]) and the SPyNet weights available in the Evaluation section of this GitHub page, but I am not getting the evaluation results described in the paper. I generated the LR images using cv2.resize() with cv2.INTER_CUBIC as the interpolation parameter (the images were already converted to the LAB color space with cv2.COLOR_BGR2LAB); a minimal sketch of this step is included after the command below. Do I have to re-train the model myself, should I generate the LR frames with Matlab's resize (as described in the paper), or am I making some other mistake? Please guide.
My results, for example: for the 'walk' video frames of the Vid4 dataset I am getting a PSNR-RGB of about 21.08.
The evaluation command I used is: !python evaluate.py --lr_dir=lr-set-lab --key_dir=key-set --target_dir=hr-set --output_dir=sr-set --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt="frame%d.png"
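For reference, this is roughly how I generated the LR frames (file names and the x4 factor below are just illustrative, based on the bix4 model directory):

```python
# Sketch of the cv2-based LR generation described above (placeholder paths).
# This is the pipeline that does NOT seem to match the paper's setup.
import cv2

hr = cv2.imread("hr-set/frame1.png")              # HR frame, BGR
hr_lab = cv2.cvtColor(hr, cv2.COLOR_BGR2LAB)      # convert to LAB first
h, w = hr_lab.shape[:2]
lr = cv2.resize(hr_lab, (w // 4, h // 4),         # x4 bicubic, no anti-aliasing
                interpolation=cv2.INTER_CUBIC)
cv2.imwrite("lr-set-lab/frame1.png", lr)
```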
Hi @AtiqEmenent, Matlab's resize performs anti-aliasing along with the interpolation, which gives very different results compared to cv2.resize. This was something I stumbled upon during development as well. Since most prior works use Matlab's function, I used the same for easier comparison. Here's a reference Matlab function for bicubic downsampling. Also make sure the normalization during the RGB to LAB conversion matches what the pretrained model uses here.
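If you want to stay in Python, something along these lines should get you much closer, since Pillow's resize anti-aliases when downscaling. Note this is only an approximation and not bit-exact with Matlab's imresize, so use the referenced Matlab function for exact reproduction; the paths and the x4 factor are placeholders:

```python
# Rough Python stand-in for Matlab-style anti-aliased bicubic downscaling.
from PIL import Image

hr = Image.open("hr-set/frame1.png")                       # HR frame
w, h = hr.size
lr = hr.resize((w // 4, h // 4), resample=Image.BICUBIC)   # anti-aliased on downscale
lr.save("lr-set/frame1.png")
# Also double-check that the RGB-to-LAB conversion and normalization match what
# the pretrained model / data loader in this repo expect.
```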