
early stopping in training #294

Closed
vishnukv64 opened this issue Jul 4, 2020 · 3 comments
Labels
Stale (scheduled for closing soon)

Comments

@vishnukv64

Hey,
I just wanted to know how to automatically stop training if the loss doesn't decrease for, say, 10 epochs, and save the best and last weights.

Or is there something I could do with the results.txt?

Thanks
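
For reference, the patience behaviour described above could be sketched roughly like this; the EarlyStopper helper and the hook point in the epoch loop are hypothetical, not something train.py shipped with at the time:

class EarlyStopper:
    def __init__(self, patience=10):
        self.patience = patience  # epochs to wait without improvement
        self.best = float('inf')  # lowest loss seen so far
        self.bad_epochs = 0       # consecutive epochs without improvement

    def step(self, loss):
        # Returns True when training should stop.
        if loss < self.best:
            self.best = loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Hypothetical use inside the epoch loop (ckpt and val_loss come from the surrounding training code):
# stopper = EarlyStopper(patience=10)
# for epoch in range(epochs):
#     val_loss = ...                               # e.g. the loss you track in results.txt
#     torch.save(ckpt, 'weights/last.pt')          # last weights every epoch
#     if val_loss < stopper.best:
#         torch.save(ckpt, 'weights/best.pt')      # best weights on improvement
#     if stopper.step(val_loss):
#         break                                    # stop after `patience` bad epochs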

@NanoCode012
Contributor

NanoCode012 commented Jul 4, 2020

Hm, if you run it without --nosave, you can just Ctrl+C the run. Then call:

yolov5/utils/utils.py

Lines 627 to 633 in 5e2429e

def strip_optimizer(f='weights/best.pt'):  # from utils.utils import *; strip_optimizer()
    # Strip optimizer from *.pt files for lighter files (reduced by 1/2 size)
    x = torch.load(f, map_location=torch.device('cpu'))
    x['optimizer'] = None
    x['model'].half()  # to FP16
    torch.save(x, f)
    print('Optimizer stripped from %s' % f)

Put the snippet below in a file in the yolov5 folder, then call it via python <filename>:

from utils.utils import strip_optimizer

strip_optimizer('weights/best.pt')
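
If you want the last weights lightened as well, the same function can be pointed at both checkpoints; this assumes the default weights/ paths that train.py writes to:

from utils.utils import strip_optimizer

# strip optimizer state from both checkpoints (halves file size, weights go to FP16)
for ckpt in ('weights/best.pt', 'weights/last.pt'):
    strip_optimizer(ckpt)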

@glenn-jocher
Member

@github-actions
Contributor

github-actions bot commented Aug 4, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
