
Add warning message for run_qa.py #29867

Merged (2 commits merged into huggingface:main on Mar 30, 2024)
Conversation

@jla524 (Contributor) commented Mar 26, 2024

What does this PR do?

Fixes #29801 (issue)

When running with --load_best_model_at_end, --metric_for_best_model defaults to "loss". However, the squad and squad_v2 metrics do not report a loss, so evaluation later fails with KeyError: 'eval_loss'.

We can emit a warning that tells users to explicitly set the metric to one supported by squad or squad_v2.

Who can review?

@ftesser and @ArthurZucker

@SunMarc SunMarc self-requested a review March 26, 2024 16:38
@SunMarc (Member) left a comment

Makes sense, LGTM! Instead of raising an error, let's raise a warning, since we don't list all available metrics (e.g. squad_v2 has other metrics such as HasAns_exact, ...).

@jla524 (Contributor, Author) commented Mar 26, 2024

Thanks @SunMarc! I've updated the PR to raise a warning instead. It'll show a warning like this:

/Users/jacky/repos/transformers/examples/pytorch/question-answering/run_qa.py:636: UserWarning: --metric_for_best_model should be set to one of ('exact_match', 'f1')
  warnings.warn(f"--metric_for_best_model should be set to one of {accepted_best_metrics}")
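The check behind this warning can be sketched in isolation as follows. This is a minimal standalone sketch, not the actual run_qa.py code: the real script reads metric_for_best_model from its parsed training arguments, and the function name check_best_model_metric is hypothetical. The accepted metric names ("exact_match", "f1") are the core keys reported by the squad metrics, as shown in the warning output above.

```python
import warnings


def check_best_model_metric(metric_for_best_model: str) -> None:
    """Warn when the best-model metric is not produced by the squad metrics.

    Sketch of the issue: with --load_best_model_at_end, the Trainer defaults
    metric_for_best_model to "loss", but the squad/squad_v2 metrics only
    report keys such as "exact_match" and "f1" (no eval_loss), which later
    surfaces as KeyError: 'eval_loss'.
    """
    accepted_best_metrics = ("exact_match", "f1")  # core squad metric keys
    if metric_for_best_model not in accepted_best_metrics:
        warnings.warn(
            f"--metric_for_best_model should be set to one of {accepted_best_metrics}"
        )


check_best_model_metric("loss")  # emits the UserWarning shown above
check_best_model_metric("f1")    # supported metric, no warning
```

Raising a warning rather than an error keeps the script usable with less common metric keys (e.g. squad_v2's HasAns_exact) that the check does not enumerate.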

@jla524 jla524 changed the title Improve error message for run_qa.py Add warning message for run_qa.py Mar 26, 2024
@ArthurZucker (Collaborator) left a comment

Thanks!

@ArthurZucker ArthurZucker merged commit 156d30d into huggingface:main Mar 30, 2024
7 checks passed
@jla524 jla524 deleted the run_qa_args branch April 12, 2024 05:26
ArthurZucker pushed a commit that referenced this pull request Apr 22, 2024
* improve: error message for best model metric

* update: raise warning instead of error
itazap pushed a commit that referenced this pull request May 14, 2024
* improve: error message for best model metric

* update: raise warning instead of error

Successfully merging this pull request may close these issues.

run_qa.py script does not compute eval_loss and gives KeyError: 'eval_loss' with load_best_model_at_end