
Commit

Fix documentation (#369)
* fix class

* Update optimization.mdx
philschmid authored Sep 6, 2022
1 parent cc5bb35 commit 815a762
Showing 2 changed files with 6 additions and 6 deletions.
6 changes: 3 additions & 3 deletions docs/source/onnxruntime/optimization.mdx
@@ -22,10 +22,10 @@ The `ORTOptimizer` class is used to optimize your ONNX model. The class can be i
1. Using an already initialized `ORTModelForXXX` class.

```python
- >>> from optimum.onnxruntime import ORTOptimizer, ORTModelForTextClassification
+ >>> from optimum.onnxruntime import ORTOptimizer, ORTModelForSequenceClassification

# Loading ONNX Model from the Hub
- >>> model = ORTModelForTextClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
+ >>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
# Create an optimizer from an ORTModelForXXX
>>> optimizer = ORTOptimizer.from_pretrained(model)
@@ -112,4 +112,4 @@ Below you will find an easy end-to-end example on how to optimize a Seq2Seq mode
## ORTOptimizer

[[autodoc]] onnxruntime.optimization.ORTOptimizer
- - all
+ - all
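
For readers following the change, the corrected optimization snippet can be carried through end to end roughly as follows. This is a minimal sketch rather than part of the diff: it assumes the `OptimizationConfig` helper and the `optimize()` method documented further down in optimization.mdx, and the save directory is an arbitrary example path.

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification, ORTOptimizer
>>> from optimum.onnxruntime.configuration import OptimizationConfig

# Load the ONNX model from the Hub and create an optimizer from it
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> optimizer = ORTOptimizer.from_pretrained(model)

# Level-1 graph optimizations (basic fusions); higher levels are more aggressive
>>> optimization_config = OptimizationConfig(optimization_level=1)

# Apply the optimizations and write the optimized model to the given directory
>>> optimizer.optimize(save_dir="distilbert_optimized", optimization_config=optimization_config)
```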
6 changes: 3 additions & 3 deletions docs/source/onnxruntime/quantization.mdx
@@ -21,10 +21,10 @@ The `ORTQuantizer` class is used to quantize your ONNX model. The class can be i
1. Using an already initialized `ORTModelForXXX` class.

```python
- >>> from optimum.onnxruntime import ORTQuantizer, ORTModelForTextClassification
+ >>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification

# Loading ONNX Model from the Hub
- >>> ort_model = ORTModelForTextClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
+ >>> ort_model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
# Create a quantizer from a ORTModelForXXX
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)
@@ -169,4 +169,4 @@ The `ORTQuantizer` currently doesn't support multi-file models, like `ORTModelFo
## ORTQuantizer

[[autodoc]] onnxruntime.quantization.ORTQuantizer
- - all
+ - all
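
The corrected quantization snippet continues in the same spirit. Again this is a sketch, not part of the commit: the `AutoQuantizationConfig` helper and the `quantize()` method are assumed from the rest of quantization.mdx, and the target instruction set and save directory are illustrative choices.

```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Load the ONNX model from the Hub and create a quantizer from it
>>> ort_model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)

# Dynamic int8 quantization (no calibration dataset needed) targeting AVX512-VNNI CPUs
>>> qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)

# Apply quantization and write the quantized model to the given directory
>>> quantizer.quantize(save_dir="distilbert_quantized", quantization_config=qconfig)
```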
