Thank you for the great work. I started training on the Stanford Cars dataset, but I ran into a GPU problem from epoch 0. I tried torch.cuda.empty_cache(), but it did not help. Is there a suggested solution? Also, could the author share the pretrained model for testing? Thank you in advance.
@Sondosmohamed1
Hi, thank you very much for your interest in my work.
I have never seen this problem before, and I am not sure whether it is an environment issue or simply that the GPU is not large enough. You could try not using netp: remove all the code related to netp = torch.nn.DataParallel(net, device_ids=[0]) and adapt the other parts accordingly. A sketch of what that change could look like is below.
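For illustration only, here is a minimal, self-contained sketch of dropping the DataParallel wrapper and calling the model directly on one device; the module, loss, optimizer, and batch below are placeholders, not the repository's actual code.

```python
# Sketch: replace the DataParallel wrapper with a plain single-device model.
# net, inputs, targets, etc. are dummy placeholders for the real training code.
import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

net = nn.Linear(16, 10)                      # placeholder for the actual network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

# Before (the part suggested for removal):
# netp = torch.nn.DataParallel(net, device_ids=[0])
# outputs = netp(inputs)

# After: move the model to one device and call it directly.
net = net.to(device)
inputs = torch.randn(4, 16, device=device)   # dummy batch
targets = torch.randint(0, 10, (4,), device=device)

optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
```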
Regarding the pre-trained weights, I am sorry that I did not keep them after my graduation. However, I have a new approach based on progressive learning and distillation, which achieves better performance with fewer parameters than this repository. The code and pretrained weights of the new approach will be made publicly available, and the paper will be released soon.