Training Code #5
Comments
Hi, we used 2 NVIDIA Tesla V100 GPUs to train the model. With your 24 GB GPU, I would suggest either reducing the batch size to 8, or keeping the batch size at 16 but setting n_feat=32 in the model file (this makes the model lighter, with a minor compromise on accuracy): MIRNet/networks/MIRNet_model.py, Line 347, at commit a668d27.
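To give a rough sense of why halving n_feat helps so much, here is a small illustrative calculation (not the authors' exact figures): the weights of a 3x3 convolution with n_feat input and output channels scale with n_feat squared, so n_feat=32 uses roughly a quarter of the parameters of the default n_feat=64 per such layer.

```python
def conv3x3_params(n_feat):
    # A 3x3 conv with n_feat input and n_feat output channels:
    # weights = n_feat * n_feat * 3 * 3, plus n_feat bias terms.
    return n_feat * n_feat * 9 + n_feat

p64 = conv3x3_params(64)  # default width
p32 = conv3x3_params(32)  # suggested lighter setting
print(p64, p32, round(p64 / p32, 2))  # 36928 9248 3.99
```

Activation memory scales only linearly with n_feat, so the actual GPU-memory saving is smaller than 4x, but still substantial.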
Since the SIDD training images are of very high resolution, you can first crop patches offline (from the full-resolution images) and then train the model on these patches. In our case, we cropped 350 patches of size 256x256 from each image (350*320 patches in total). Regarding the training code, we aim to release it in about two weeks.
Thank you so much for your reply! I will try again!
Hi! The performance of your method on real noisy datasets is really attractive. I am trying to train the denoising model following the instructions in the paper, but I have some difficulty reproducing the results. It seems the model is too large to train with batch size 16 on a GPU with 24 GB of memory, and the training dataset is so large that one epoch takes a very long time to process. Could you share the details of the training time and the hardware used? It would be even better if you could share the training code. Looking forward to your reply! Thank you!