An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)
Reproduction (minimal, reproducible, runnable)
This can be reproduced by running the current image-classification fine-tuning example at optimum/examples/onnxruntime/training/image-classification/run_image_classification.py with the following run command:
huggingface/transformers#27326 recently changed the transformers Trainer to log the gradient norm, which added a new positional argument to _maybe_log_save_evaluate. This change is not reflected in the Optimum repo, resulting in the following error:
Traceback (most recent call last):
File "run_image_classification.py", line 451, in <module>
main()
File "run_image_classification.py", line 425, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 392, in train
return inner_training_loop(
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/optimum/onnxruntime/trainer.py", line 774, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
TypeError: _maybe_log_save_evaluate() missing 1 required positional argument: 'ignore_keys_for_eval'
WORKAROUND: adjust optimum's trainer.py to pass None where the grad_norm argument is expected, since None is its default value.
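A minimal sketch of the mismatch and the workaround, assuming the new parameter added by transformers#27326 is named grad_norm (the function and argument names below are stand-ins, not the actual trainer code):

```python
# New-style transformers signature: an extra positional grad_norm argument
# sits between tr_loss and model (assumed based on transformers#27326).
def _maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval):
    return grad_norm

# Old-style call from optimum's trainer.py: one argument short, so the
# last parameter goes unfilled and Python raises the TypeError above.
try:
    _maybe_log_save_evaluate("tr_loss", "model", "trial", 0, "ignore_keys")
except TypeError as exc:
    print(exc)  # missing 1 required positional argument: 'ignore_keys_for_eval'

# Workaround: pass None in the grad_norm slot, matching its default behavior.
result = _maybe_log_save_evaluate("tr_loss", None, "model", "trial", 0, "ignore_keys")
print(result)  # None
```

This is only an illustration of why the call fails; the real fix is to update the call site in optimum/onnxruntime/trainer.py to match the current transformers signature.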
System Info
Who can help?
@amyeroberts @JingyaHuang @regisss
Information
Tasks