[TVMC] Keep quantized weights when importing PyTorch model (apache#9417)
BYOC requires `keep_quantized_weight` to be set to `True` when converting
PyTorch models with `relay.frontend.from_pytorch`. This commit sets it to
`True` when importing PyTorch models through TVMC.

Change-Id: I8c183f9f802ea54d24679a4017e56481d84e5655
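
For illustration, a minimal sketch of the direct `relay.frontend.from_pytorch` call that this flag affects; the torchvision model, input name, and input shape below are assumptions for the example, not part of the commit:

```python
# Sketch: converting a quantized TorchScript model to Relay while keeping
# the weights in quantized (int8) form instead of dequantizing to float32.
# The model choice and input shape are illustrative assumptions.
import torch
import torchvision
from tvm import relay

model = torchvision.models.quantization.resnet18(pretrained=True, quantize=True)
model.eval()

input_shape = (1, 3, 224, 224)
traced_model = torch.jit.trace(model, torch.randn(input_shape)).eval()

# keep_quantized_weight=True is the setting this commit turns on inside TVMC.
mod, params = relay.frontend.from_pytorch(
    traced_model, [("input0", input_shape)], keep_quantized_weight=True
)
```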
lhutton1 authored and mehrdadh committed Dec 1, 2021
1 parent 5d57ab7 commit 0adf6e6
Showing 1 changed file with 3 additions and 1 deletion.
python/tvm/driver/tvmc/frontends.py (3 additions, 1 deletion)
@@ -262,7 +262,9 @@ def load(self, path, shape_dict=None, **kwargs):
         input_shapes = list(shape_dict.items())
 
         logger.debug("parse Torch model and convert into Relay computation graph")
-        return relay.frontend.from_pytorch(traced_model, input_shapes, **kwargs)
+        return relay.frontend.from_pytorch(
+            traced_model, input_shapes, keep_quantized_weight=True, **kwargs
+        )
 
 
 class PaddleFrontend(Frontend):
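For context, a hedged sketch of how this code path is reached from the TVMC Python API; the model file name and input name are placeholders, not from the commit:

```python
# Hypothetical usage of the TVMC Python API. After this change, a traced
# quantized PyTorch model loaded here keeps its quantized weights, because
# PyTorchFrontend.load now forwards keep_quantized_weight=True.
from tvm.driver import tvmc

# "quantized_model.pt" and "input0" are illustrative placeholders.
model = tvmc.load("quantized_model.pt", shape_dict={"input0": [1, 3, 224, 224]})
package = tvmc.compile(model, target="llvm")
```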
