Update ORT training examples & add summarization example #383
Conversation
…ngface/optimum into refactoring-orttrainer-inf
* Override export of ORTSeq2SeqTrainer
* Do not force download by default in ORTModel (#356)
* Update OnnxConfigWithLoss wrapper
* ORT optimizer refactorization (#294)
* Refactorization of ORTOptimizer
* Refactorization of ORTModel
* Adapt examples according to refactorization
* Adapt tests
* Fix style
* Remove quantizer modification
* Fix style
* Apply modifications from #270 for quantizer and optimizer to have same behavior
* Add test for optimization of Seq2Seq models
* Fix style
* Add ort config saving when optimizing a model
* Add ort config saving when quantizing a model
* Add tests
* Fix style
* Adapt optimization examples
* Fix readme
* Remove unused parameter
* Adapt quantization examples
* Fix quantized model and ort config saving
* Add documentation
* Add model configuration saving to simplify loading of optimized model
* Fix style
* Fix description
* Fix quantization tests
* Remove opset argument, which is the onnx config default opset when exporting with ORTModels
* Fix import (#360)
* Fix export of decoders
* Add flag to export only decoders
* Fix ORTTrainer inference ort subclass parsing
* Fix filenames when empty suffix given (#363)
* fix(optimization): handle empty file suffix
* fix(quantization): handle empty file suffix
* use pathlib for save_dir
* run test again
* Update optimum/onnxruntime/quantization.py
* ReRun test that failed because of cache (network)
* Override the evaluation and prediction loop in ORTSeq2SeqTrainer
* Fix documentation (#369)
* fix class
* Update optimization.mdx
* Fix label smoother device prob
* Fix lm_logits and labels dimension mismatch
* Clean up

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
Co-authored-by: Pierre Snell <ierezell@gmail.com>
Co-authored-by: Pierre Snell <pierre.snell@botpress.com>
Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
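The commits "Fix filenames when empty suffix given" and "use pathlib for save_dir" describe handling an optional filename suffix without producing a stray underscore. A minimal sketch of that pattern, using a hypothetical helper name (`build_output_path` is not part of optimum):

```python
from pathlib import Path

def build_output_path(save_dir: str, file_name: str, suffix: str = "") -> Path:
    # Hypothetical helper: append an optional suffix before the .onnx
    # extension, and leave the name untouched when the suffix is empty
    # so we never emit names like "model_.onnx".
    stem = Path(file_name).stem
    name = f"{stem}_{suffix}.onnx" if suffix else f"{stem}.onnx"
    return Path(save_dir) / name

# With a suffix: out/model_optimized.onnx; without: out/model.onnx
optimized = build_output_path("out", "model.onnx", "optimized")
plain = build_output_path("out", "model.onnx", "")
```

Using `pathlib.Path` for `save_dir` (as the commit notes) keeps the join portable across platforms.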
The documentation is not available anymore as the PR was closed or merged.
P.S. This PR is based on the implementation of ORT inference for
Mostly typos.
I just have a question regarding the use of `[INFO]` or `[ERROR]` with `logger.warning`. It looks weird to me, but I guess there is a good reason to do it 🙂
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
…ngface/optimum into ort-seq2seq-train-examples
LGTM @JingyaHuang !!
What does this PR do?
Use `ORTSeq2SeqTrainer.inference_with_ort` in training examples.