run_qa.py script does not compute eval_loss and gives KeyError: 'eval_loss' with load_best_model_at_end #29801
Closed
Labels: bug, Examples, Good Second Issue
System Info
transformers version: 4.39.0

Who can help?
The official QA training script run_qa.py returns the following error when run with --load_best_model_at_end and --metric_for_best_model "loss":

KeyError: 'eval_loss'

full_log.txt
@ArthurZucker @sgugger
Information

Tasks
examples folder (such as GLUE/SQuAD, ...)

Reproduction
To reproduce the error quickly, you can use a distilled model and limit max_train_samples and max_eval_samples.
I have also tested this with a full-size model and the full dataset, and the error occurs there as well.
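A reproduction along these lines might look like the following. This is a sketch, not the reporter's exact command: the model name, dataset, sample limits, and output directory are illustrative placeholders, though the flags themselves are standard run_qa.py / TrainingArguments options.

```shell
# Hypothetical invocation; model, dataset, sample counts, and output_dir are
# placeholders chosen for a quick run, not taken from the original report.
python run_qa.py \
  --model_name_or_path distilbert-base-uncased \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --max_train_samples 100 \
  --max_eval_samples 100 \
  --evaluation_strategy epoch \
  --save_strategy epoch \
  --load_best_model_at_end \
  --metric_for_best_model "loss" \
  --output_dir /tmp/qa_debug
```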
Expected behavior
During the evaluation phase, eval_loss should be computed so that the best model can be selected and saved using the loss metric.