
ORT optimizer refactorization #294

Merged 28 commits into main from ort-optimizer-refactorization on Aug 24, 2022

Conversation

echarlaix (Collaborator)

Refactorization of ORTOptimizer
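The thread itself doesn't include a usage snippet, but the refactored workflow this PR introduces can be sketched as follows. This is a minimal, hedged sketch assuming the `optimum.onnxruntime` API of that era (`ORTOptimizer.from_pretrained` taking an `ORTModel`, an `OptimizationConfig` object, and the `from_transformers=True` export flag); exact names may differ across optimum versions, so treat it as illustrative rather than canonical.

```python
def optimize(model_id: str, save_dir: str) -> None:
    """Export a transformers checkpoint to ONNX and optimize it with ORT.

    Imports are kept inside the function so the sketch can be read (and the
    module imported) without optimum/onnxruntime installed.
    """
    from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
    from optimum.onnxruntime.configuration import OptimizationConfig

    # Export the checkpoint to ONNX, then build the optimizer directly
    # from the resulting ORTModel (the pattern this refactorization enables).
    model = ORTModelForSequenceClassification.from_pretrained(
        model_id, from_transformers=True
    )
    optimizer = ORTOptimizer.from_pretrained(model)

    # optimization_level=1 enables basic graph optimizations; the ORT config
    # is saved alongside the optimized model so it can be reloaded later.
    optimization_config = OptimizationConfig(optimization_level=1)
    optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
```

Called as e.g. `optimize("distilbert-base-uncased-finetuned-sst-2-english", "onnx_optimized")`, this writes the optimized ONNX model and its ORT configuration into `save_dir`.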

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.

@echarlaix echarlaix marked this pull request as ready for review August 23, 2022 10:57
@regisss (Contributor) left a comment


Great work @echarlaix 🔥
I just left a few minor comments regarding example READMEs.

@philschmid (Member) left a comment


LGTM! Awesome work 🔥✅

@echarlaix echarlaix merged commit fb7e303 into main Aug 24, 2022
@echarlaix echarlaix deleted the ort-optimizer-refactorization branch August 24, 2022 12:31
JingyaHuang added a commit that referenced this pull request Sep 7, 2022
* Override export of ORTSeq2SeqTrainer

* Do not force download by default in ORTModel (#356)

* Update OnnxConfigWithLoss wrapper

* ORT optimizer refactorization (#294)

* Refactorization of ORTOptimizer

* Refactorization of ORTModel

* Adapt examples according to refactorization

* Adapt tests

* Fix style

* Remove quantizer modification

* Fix style

* Apply modifications from #270 for quantizer and optimizer to have same behavior

* Add test for optimization of Seq2Seq models

* Fix style

* Add ort config saving when optimizing a model

* Add ort config saving when quantizing a model

* Add tests

* Fix style

* Adapt optimization examples

* Fix readme

* Remove unused parameter

* Adapt quantization examples

* Fix quantized model and ort config saving

* Add documentation

* Add model configuration saving to simplify loading of optimized model

* Fix style

* Fix description

* Fix quantization tests

* Remove opset argument which is onnx config default opset when exporting with ORTModels

* Fix import (#360)

* Fix export of decoders

* Add flag to export only decoders

* Fix ORTTrainer inference ort subclass parsing

* Fix filenames when empty suffix given (#363)

* fix(optimization): handle empty file suffix

* fix(quantization): handle empty file suffix

* Use pathlib for save_dir

* run test again

* Update optimum/onnxruntime/quantization.py

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>

* Re-run test that failed because of cache (network)

Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>

* Override the evaluation and prediction loop in ORTSeq2SeqTrainer

* Fix documentation (#369)

* fix class

* Update optimization.mdx

* Fix label smoother device problem

* Fix lm_logits and labels dimension mismatch

* Clean up

Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
Co-authored-by: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Co-authored-by: Pierre Snell <ierezell@gmail.com>
Co-authored-by: Pierre Snell <pierre.snell@botpress.com>
Co-authored-by: Philipp Schmid <32632186+philschmid@users.noreply.github.com>
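The "handle empty file suffix" fixes listed above (#363) boil down to filename construction: appending a suffix separator unconditionally would produce names like `model_.onnx` when the suffix is empty. A minimal sketch of that kind of fix, using `pathlib` as the commit messages suggest; the helper name `onnx_file_name` is hypothetical, not taken from the codebase:

```python
from pathlib import Path


def onnx_file_name(model_name: str, file_suffix: str = "optimized") -> str:
    """Build the output ONNX filename, tolerating an empty suffix.

    Only prepend the "_" separator when a non-empty suffix is given, so an
    empty suffix yields "model.onnx" rather than the broken "model_.onnx".
    """
    stem = Path(model_name).stem
    suffix = f"_{file_suffix}" if file_suffix else ""
    return f"{stem}{suffix}.onnx"


print(onnx_file_name("model.onnx"))       # model_optimized.onnx
print(onnx_file_name("model.onnx", ""))   # model.onnx
```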
JingyaHuang added a commit that referenced this pull request Oct 2, 2022
* Inference with ORTModel

* Clean up unused imports

* Replace Inference session by ort model

* Update modeling for custom tasks

* Replace in evaluation_loop

* Refactoring prediction_loop

* ORTSeq2SeqTrainer refactoring - Inference with ORTModel (#359)

* Override export of ORTSeq2SeqTrainer

* Do not force download by default in ORTModel (#356)

* Update OnnxConfigWithLoss wrapper

* ORT optimizer refactorization (#294)

* Refactorization of ORTOptimizer

* Refactorization of ORTModel

* Adapt examples according to refactorization

* Fix ORTTrainer inference ort subclass parsing

* Replace datasets.load_metric by evaluate

* Add summarization example

* Enable ORT inference

* Fix inference args

* Mention ORT inference in READMEs

* Remove repetitive code in Trainer

* Update examples to trfrs 4.22.1

* Fix qa example prediction error

* Update summarization/README.md

* Fix logger consistency

* Make readme consistent with trfrs

* Put back onnx config with past and loss test
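One commit earlier in the history fixes an "lm_logits and labels dimension mismatch". The details of that fix aren't shown in the thread, but the usual shape problem it alludes to in causal language modeling can be illustrated with a small self-contained example (all names and sizes here are hypothetical, for illustration only): logits at position i predict the token at position i+1, so both tensors must be shifted by one before computing the loss.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, seq_len, vocab = 2, 5, 10

# lm_logits: one distribution over the vocabulary per position.
lm_logits = rng.random((batch, seq_len, vocab))
# labels: one token id per position.
labels = rng.integers(0, vocab, size=(batch, seq_len))

# Shift so that logits at position i line up with the label at i+1:
# drop the last logits position and the first labels position.
shift_logits = lm_logits[:, :-1, :]
shift_labels = labels[:, 1:]

# Both now cover seq_len - 1 positions, so a per-token loss can be computed.
assert shift_logits.shape == (batch, seq_len - 1, vocab)
assert shift_labels.shape == (batch, seq_len - 1)
```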