Keras QAT Docs update for standalone batchnorms (#3654)
Signed-off-by: Rishabh Thakur <quic_ristha@quicinc.com>
Co-authored-by: Rishabh Thakur <quic_ristha@quicinc.com>
Signed-off-by: Bharath Ramaswamy <quic_bharathr@quicinc.com>
quic-ristha authored and quic-bharathr committed Sep 13, 2024
1 parent 5abb34c commit 4f7bc22
Showing 2 changed files with 10 additions and 1 deletion.
Docs/api_docs/keras_quantsim.rst (1 addition, 1 deletion)
@@ -43,7 +43,7 @@ Code Examples

.. literalinclude:: ../keras_code_examples/quantization.py
   :language: python
-   :lines: 37-40
+   :lines: 37-42

**Quantize with Fine tuning**

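For reference, a sketch of what the widened :lines: 37-42 range now pulls into the rendered docs, reconstructed from the quantization.py diff below; the exact line positions of these statements in that file are assumed:

import tensorflow as tf

from aimet_tensorflow.keras import quantsim
# Optional import, only required for fine-tuning
from aimet_tensorflow.keras.quant_sim.qc_quantize_wrapper import QcQuantizeWrapper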
Docs/keras_code_examples/quantization.py (9 additions, 0 deletions)
@@ -38,6 +38,8 @@
import tensorflow as tf

from aimet_tensorflow.keras import quantsim
+# Optional import, only required for fine-tuning
+from aimet_tensorflow.keras.quant_sim.qc_quantize_wrapper import QcQuantizeWrapper

def evaluate(model: tf.keras.Model, forward_pass_callback_args):
"""
@@ -68,6 +70,13 @@ def quantize_model():
    sim.compute_encodings(evaluate, forward_pass_callback_args=(dummy_x, dummy_y))

    # Do some fine-tuning
+    # Note: fine-tuning on GPU is not supported for models with non-trainable
+    # BatchNorms, so the user needs to explicitly set the BatchNorms to
+    # trainable. The snippet below sets the wrapped BatchNorm layers to trainable.
+    for layer in sim.model.layers:
+        if isinstance(layer, QcQuantizeWrapper) and isinstance(layer._layer_to_wrap, tf.keras.layers.BatchNormalization):
+            layer._layer_to_wrap.trainable = True
+
    sim.model.fit(x=dummy_x, y=dummy_y, epochs=10)

quantize_model()
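
For context, a minimal end-to-end sketch (not part of this commit) of where the BatchNorm-unfreezing loop sits in a QAT flow. The tiny Sequential model, random data, forward-pass callback, and the explicit compile of sim.model are illustrative assumptions; QuantizationSimModel and compute_encodings are invoked the same way quantization.py uses them.

import numpy as np
import tensorflow as tf

from aimet_tensorflow.keras import quantsim
from aimet_tensorflow.keras.quant_sim.qc_quantize_wrapper import QcQuantizeWrapper

# Toy model containing a standalone BatchNorm (illustrative assumption).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(1),
])

dummy_x = np.random.rand(32, 4).astype(np.float32)
dummy_y = np.random.rand(32, 1).astype(np.float32)

# Wrap the model for quantization simulation.
sim = quantsim.QuantizationSimModel(model)

# Run representative data through the simulated model so AIMET can compute
# quantization encodings (same callback shape as evaluate() in quantization.py).
def forward_pass(sim_model, args):
    x, _ = args
    sim_model(x)

sim.compute_encodings(forward_pass, forward_pass_callback_args=(dummy_x, dummy_y))

# Unfreeze the wrapped BatchNorms before fine-tuning, per the note above:
# reach through each QcQuantizeWrapper to the original layer it wraps.
for layer in sim.model.layers:
    if isinstance(layer, QcQuantizeWrapper) and \
            isinstance(layer._layer_to_wrap, tf.keras.layers.BatchNormalization):
        layer._layer_to_wrap.trainable = True

# Compile and fine-tune the quantization-simulated model.
sim.model.compile(optimizer="adam", loss="mse")
sim.model.fit(x=dummy_x, y=dummy_y, epochs=1)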
