From d1318320632dea889a3eb5287dea07ca320673aa Mon Sep 17 00:00:00 2001
From: "Wang, Mengni"
Date: Tue, 28 Nov 2023 09:35:52 +0800
Subject: [PATCH] Enhance doc for DmlExecutionProvider (#1419)

---
 docs/source/quantization.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/docs/source/quantization.md b/docs/source/quantization.md
index e55eb5aef3d..92994d044c5 100644
--- a/docs/source/quantization.md
+++ b/docs/source/quantization.md
@@ -524,7 +524,11 @@ Intel(R) Neural Compressor support multi-framework: PyTorch, Tensorflow, ONNX Ru

-> Note: DmlExecutionProvider support works as experimental, please expect exceptions.
+> ***Note***
+>
+> DmlExecutionProvider support is experimental; please expect exceptions.
+>
+> Known limitation: the batch size of ONNX models must be fixed to 1 for DmlExecutionProvider; multi-batch and dynamic batch sizes are not supported yet.
 
 Examples of configure:
 ```python