
GPU Issue During Stanford Cars Training: Seeking Solutions and Request for Pretrained Model #7

Open
Sondosmohamed1 opened this issue Dec 8, 2023 · 1 comment


@Sondosmohamed1

Thank you for the great work. I started training on the Stanford Cars dataset, but I ran into a GPU error right from epoch 0. I tried torch.cuda.empty_cache(), but it did not help. Is there any proposed solution? Also, could the author share the pretrained model for testing? Thank you in advance.

(screenshot of the error attached)
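For reference, a minimal sketch of the usual workarounds for a CUDA out-of-memory error during training, assuming a standard PyTorch loop: lower the batch size, use mixed precision, and accumulate gradients. The names `net`, `trainloader`, `criterion`, and `optimizer` are placeholders, not identifiers from this repository.

```python
import torch

# Hypothetical memory-friendly training step (placeholders: net, trainloader,
# criterion, optimizer). Mixed precision reduces activation memory, and
# gradient accumulation emulates a larger batch without holding it on the GPU.
scaler = torch.cuda.amp.GradScaler()
accum_steps = 4  # effective batch size = loader batch size * accum_steps

for step, (inputs, targets) in enumerate(trainloader):
    inputs, targets = inputs.cuda(), targets.cuda()
    with torch.cuda.amp.autocast():
        outputs = net(inputs)
        loss = criterion(outputs, targets) / accum_steps
    scaler.scale(loss).backward()
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```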

@Dichao-Liu
Owner

@Sondosmohamed1
Hi, thank you very much for your interest in my work.

I have never seen this problem before. I am not sure whether it is an environment problem or simply that your GPU memory is not large enough. Maybe you could try not using netp (remove all the code related to netp = torch.nn.DataParallel(net, device_ids=[0]) and adapt the other parts accordingly), as sketched below.
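A rough sketch of what that change could look like, assuming the training script calls the model through the DataParallel wrapper; `net` and `inputs` stand in for the repository's model and a training batch:

```python
import torch

# Hypothetical single-GPU setup; `net` and `inputs` are placeholders.
device = torch.device("cuda:0")
net = net.to(device)

# Before: wrap the model and call it through the wrapper
# netp = torch.nn.DataParallel(net, device_ids=[0])
# outputs = netp(inputs.to(device))

# After: drop the DataParallel wrapper and call the model directly
outputs = net(inputs.to(device))
```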

Regarding the pre-trained weights, I am sorry, but I did not keep them after my graduation. However, I have a new approach based on progressive learning and distillation, which achieves better performance with fewer parameters than this repository. The code and pretrained weights of the new approach will be made publicly available, and the paper will be released soon.
