
[FEATURE] Add Stopping Criteria to Grid Search #759

Open
meganjkurka opened this issue Jun 18, 2024 · 3 comments
Labels
type/feature Feature request

Comments

@meganjkurka

🚀 Feature

Add the ability to stop Grid Search if the validation metric exceeds a certain value or if the models are no longer improving (e.g., stop if BLEU hasn’t improved by 1 over the 5 best models).

Motivation

Automatic stopping will ensure that the grid search is stopped once the models are "good enough" and there is no unnecessary waste of resources.
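A minimal sketch of one possible interpretation of the plateau criterion above (all names and parameters here are illustrative, not existing H2O LLM Studio API): stop the search when none of the last few finished models improved the best score by at least a minimum delta.

```python
# Hypothetical sketch of a plateau-based stopping rule for grid search.
# Nothing here is existing H2O LLM Studio API; names are illustrative.

def should_stop_search(scores, patience=5, min_delta=1.0):
    """Stop when none of the last `patience` finished models improved the
    best score seen before them by at least `min_delta` (higher = better,
    e.g. BLEU)."""
    if len(scores) <= patience:
        return False  # too few finished models to detect a plateau
    best_before = max(scores[:-patience])
    best_recent = max(scores[-patience:])
    return best_recent < best_before + min_delta


# Example: best score plateaued around 12 for the last 5 runs -> stop
print(should_stop_search([10, 11, 12, 12.2, 11.9, 12.1, 12.0, 11.8]))  # True
```

Whether `scores` should be ordered by finish time or by rank is exactly the ambiguity raised later in this thread.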

meganjkurka added the type/feature Feature request label on Jun 18, 2024

tmostak commented Jun 18, 2024

I'd like to upvote this... grid search is great but a lot of time/GPU usage is spent on runs where it is clear within the first 10-30% of the training run that the validation metric is not going to be competitive.

For my use case at least, it would be ideal to be able to specify that a run should be terminated if the validation metric exceeds X at Y% into the training run.
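One way to sketch that per-run kill rule (the threshold X and checkpoint fraction Y are hypothetical parameters, not an existing H2O LLM Studio option):

```python
# Hypothetical sketch of the per-run termination rule described above.
# `threshold` (X) and `check_at` (Y) are illustrative, not an existing option.

def should_terminate_run(progress, val_metric, threshold, check_at=0.3):
    """Terminate this run when, at or past `check_at` fraction of training,
    a loss-like validation metric is still above `threshold`."""
    return progress >= check_at and val_metric > threshold


print(should_terminate_run(0.30, 2.5, threshold=2.0))  # past checkpoint, metric too high -> True
print(should_terminate_run(0.10, 2.5, threshold=2.0))  # too early to judge -> False
```

In practice the comparison direction would need to flip for higher-is-better metrics such as BLEU.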

@pascal-pfeiffer
Collaborator

@meganjkurka is that the same as what you are looking for? Early stopping/killing of an experiment if a threshold validation metric is not met.


pascal-pfeiffer commented Jun 19, 2024

stop if BLEU hasn’t improved by 1 over the 5 best models

@meganjkurka, trying to understand what is expected from the feature: could you please clarify how exactly this should work? Grid search experiments are started in a random order. So, if by chance the first 5 models are all similarly bad (say, all BLEU < 1), would it stop? Even if later experiments would be great?

Projects: None yet
Development: No branches or pull requests
3 participants