[TVMC] Keep quantized weights when importing PyTorch model #9417
Conversation
BYOC requires `keep_quantized_weight` to be set to `True` when converting PyTorch models using `from_pytorch`. This change sets it to `True` when importing a model through TVMC. Change-Id: I8c183f9f802ea54d24679a4017e56481d84e5655

cc @leandron @masahi @ekalda
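For context, a minimal sketch of the frontend call this change enables, assuming torchvision's quantized MobileNetV2 as a stand-in model (the model and the input name are illustrative, not taken from the diff):

```python
import torch
import torchvision
from tvm import relay

# Quantized MobileNetV2 from torchvision, used here purely as an example.
model = torchvision.models.quantization.mobilenet_v2(quantize=True).eval()
input_data = torch.randn(1, 3, 224, 224)
script_module = torch.jit.trace(model, input_data).eval()

# keep_quantized_weight=True keeps the original int8 weights in the Relay
# params instead of dequantizing them to float32, which BYOC targets need.
mod, params = relay.frontend.from_pytorch(
    script_module,
    [("input", (1, 3, 224, 224))],
    keep_quantized_weight=True,
)
```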
I'm glad you find this feature useful!
LGTM
If this was merged in before the release branch was cut, can we backport this fix to 0.8? It seems pretty inappropriate to leave something like this unfixed in the release.
cc @junrushao1994 would it be possible to include this fix in the v0.8 draft?
Further, should there be a test for this?
I think "from_pytorch API" is tested seperately. We can add a test to see if PyTorchFrontend.load(...) issues the correct a call to "from_pytorch" API. @lhutton1 WDYT ? |
We can check that the output of converting a quantized model has either
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a test to check that a quantized PyTorch MobileNetV2 model is converted correctly. Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
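For illustration, a sketch of what such an end-to-end test might look like, assuming torchvision's quantized MobileNetV2 and TVMC's Python `load` API; the actual test added in the follow-up PR may differ:

```python
import torch
import torchvision
from tvm.driver import tvmc

def test_quantized_mobilenetv2_keeps_int8_weights(tmp_path):
    # Build and trace a quantized MobileNetV2, then save it as TorchScript.
    model = torchvision.models.quantization.mobilenet_v2(quantize=True).eval()
    traced = torch.jit.trace(model, torch.randn(1, 3, 224, 224))
    model_path = str(tmp_path / "mobilenetv2_quant.pth")
    traced.save(model_path)

    # Import through TVMC, which should now keep quantized weights.
    tvmc_model = tvmc.load(model_path, shape_dict={"input": [1, 3, 224, 224]})

    # With keep_quantized_weight=True, at least the conv/linear weights
    # should remain int8 rather than being dequantized to float32.
    dtypes = {param.dtype for param in tvmc_model.params.values()}
    assert "int8" in dtypes
```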