Commit ca35bf1

Keep flags OpTypesSupportPerChannelQuantization and QDQChannelAxis for internal use
Will have a follow-up PR to fine tune the code
1 parent fa51f26 commit ca35bf1

File tree

1 file changed: +1 −3 lines changed

onnxruntime/python/tools/quantization/quantize.py (+1 −3)
```diff
@@ -195,9 +195,7 @@ def quantize_static(model_input,
         OpTypesToExcludeOutputQuantizatioin = list of op type : Default is []. If any op type is specified, it won't quantize
             the output of ops with this specific op types.
         DedicatedQDQPair = True/False : Default is False. When inserting QDQ pair, multiple nodes can share a single QDQ pair as their inputs.
-            If True, it will create identical and dedicated QDQ pair for each node.
-        OpTypesSupportPerChannelQuantization = list of op type : Default is []. List of op types that has per channel quantization support.
-        QDQChannelAxis = Integer : Default is 0. Channel axis for QDQ pair when per_channel is True.
+            If True, it will create identical and dedicated QDQ pair for each node.
    '''

    mode = QuantizationMode.QLinearOps
```
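Per the commit message, the two removed docstring entries are kept as working `extra_options` keys for internal use; only their documentation is dropped. A minimal sketch of how a caller would still pass them (the op-type list, model paths, and calibration reader below are placeholder assumptions, not part of this commit):

```python
# Sketch: passing the internal-use flags through quantize_static's
# extra_options dict. Keys are taken from the docstring text in the diff;
# the values chosen here are illustrative assumptions.
extra_options = {
    "DedicatedQDQPair": True,  # create a dedicated QDQ pair per consuming node
    "OpTypesSupportPerChannelQuantization": ["Conv"],  # hypothetical op-type list
    "QDQChannelAxis": 0,  # channel axis used when per_channel is True
}

# Typical call site (commented out; requires onnxruntime, a model file,
# and a CalibrationDataReader implementation):
# from onnxruntime.quantization import quantize_static
# quantize_static("model.onnx", "model.quant.onnx", calibration_data_reader,
#                 per_channel=True, extra_options=extra_options)
```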
