
[TVMC] Keep quantized weights when importing PyTorch model #9417

Merged: 1 commit merged into apache:main from lhutton1:keep-quantized-weight on Nov 2, 2021

Conversation

@lhutton1 (Contributor) commented Nov 1, 2021

BYOC requires `keep_quantized_weight` to be set to `True` when converting PyTorch models using `from_pytorch`. This change sets it to `True` when importing a model using TVMC.

cc @leandron @masahi @ekalda

BYOC requires `keep_quantized_weight` to be set to true when converting
PyTorch models using `from_pytorch`. This sets it to `True` when using
TVMC.

Change-Id: I8c183f9f802ea54d24679a4017e56481d84e5655
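
For context, a minimal sketch of the underlying importer call with this flag enabled (the torchvision model and input shape are illustrative, not taken from this PR):

```python
import torch
import torchvision
from tvm import relay

# Trace a quantized PyTorch model (illustrative; any quantized
# TorchScript module works here).
model = torchvision.models.quantization.mobilenet_v2(
    pretrained=True, quantize=True
).eval()
input_data = torch.randn(1, 3, 224, 224)
script_module = torch.jit.trace(model, input_data).eval()

# keep_quantized_weight=True makes the importer bind the original
# int8/uint8 weights as parameters instead of dequantized float32
# copies, which is what BYOC backends need.
mod, params = relay.frontend.from_pytorch(
    script_module,
    [("input", (1, 3, 224, 224))],
    keep_quantized_weight=True,
)
```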
@masahi (Member) left a comment:


I'm glad you find this feature useful!

@manupak (Contributor) left a comment:


LGTM

@manupak manupak merged commit d1aebcb into apache:main Nov 2, 2021
@manupak (Contributor) commented Nov 2, 2021

Thanks @masahi @lhutton1 . This is merged now.

@u99127 (Contributor) commented Nov 2, 2021

If this was merged before the release branch was cut, can we backport this fix to 0.8? It seems pretty inappropriate to leave something like this unfixed in the release.

@lhutton1 lhutton1 deleted the keep-quantized-weight branch November 2, 2021 09:16
@lhutton1 (Contributor, Author) commented Nov 2, 2021

cc @junrushao1994 would it be possible to include this fix in the v0.8 draft?

@u99127 (Contributor) commented Nov 2, 2021

Further, should there be a test for this?

@manupak (Contributor) commented Nov 2, 2021

I think the `from_pytorch` API is tested separately. We could add a test to see if `PyTorchFrontend.load(...)` issues the correct call to the `from_pytorch` API (see the sketch below). @lhutton1 WDYT?
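
A rough sketch of such a mock-based test; the `tvm.driver.tvmc.frontends` module path, the `load(...)` signature, and the patch targets are my assumptions about the TVMC frontend, not taken from this PR:

```python
from unittest import mock

from tvm.driver.tvmc import frontends


def test_pytorch_load_keeps_quantized_weights():
    # Stub out both the TorchScript loader and the Relay importer, so the
    # test only inspects the arguments TVMC forwards to from_pytorch.
    with mock.patch("torch.jit.load") as jit_load, mock.patch(
        "tvm.relay.frontend.from_pytorch"
    ) as from_pytorch:
        jit_load.return_value = mock.MagicMock()
        from_pytorch.return_value = (mock.MagicMock(), mock.MagicMock())

        # "model.pth" and the shape dict are placeholders for a real
        # traced model; the frontend never touches the file thanks to
        # the patched torch.jit.load.
        frontend = frontends.PyTorchFrontend()
        frontend.load("model.pth", shape_dict={"input": [1, 3, 224, 224]})

        _, kwargs = from_pytorch.call_args
        assert kwargs.get("keep_quantized_weight") is True
```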

@lhutton1 (Contributor, Author) commented Nov 2, 2021

We could check that the output of converting a quantized model has int8 or uint8 weights, rather than float32, which I think would be slightly better (see the sketch below). I did look at this, but it looks like there are issues running PyTorch TVMC tests in CI (#7455), so it will need further investigation. Edit: it looks related to #9362, so I'll look at re-enabling the test once CI is updated and adding a similar quantized version.
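
Roughly, that dtype check could look like this, given the `mod, params` pair returned by an import like the sketch above (the note about int32 biases is my assumption about typical QNN graphs):

```python
# Weights of a quantized model imported with keep_quantized_weight=True
# should stay int8/uint8; biases commonly remain int32 in QNN graphs,
# so we only require that some integer weights survived.
weight_dtypes = {param.dtype for param in params.values()}
assert weight_dtypes & {"int8", "uint8"}, (
    f"expected quantized int8/uint8 weights, found only {weight_dtypes}"
)
```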

lhutton1 added a commit to lhutton1/tvm that referenced this pull request Nov 8, 2021
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
leandron pushed a commit that referenced this pull request Nov 9, 2021
As a follow-up to #9417, and now that #9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
mehrdadh pushed a commit to mehrdadh/tvm that referenced this pull request Dec 1, 2021
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
mehrdadh pushed a commit to mehrdadh/tvm that referenced this pull request Dec 1, 2021
BYOC requires `keep_quantized_weight` to be set to true when converting
PyTorch models using `from_pytorch`. This sets it to `True` when using
TVMC.

Change-Id: I8c183f9f802ea54d24679a4017e56481d84e5655
mehrdadh pushed a commit to mehrdadh/tvm that referenced this pull request Dec 1, 2021
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 7, 2022
BYOC requires `keep_quantized_weight` to be set to true when converting
PyTorch models using `from_pytorch`. This sets it to `True` when using
TVMC.

Change-Id: I8c183f9f802ea54d24679a4017e56481d84e5655
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 7, 2022
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
yangulei pushed a commit to yangulei/tvm that referenced this pull request Jan 11, 2022
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022
BYOC requires `keep_quantized_weight` to be set to true when converting
PyTorch models using `from_pytorch`. This sets it to `True` when using
TVMC.

Change-Id: I8c183f9f802ea54d24679a4017e56481d84e5655
ylc pushed a commit to ylc/tvm that referenced this pull request Jan 13, 2022
As a follow-up to apache#9417, and now that apache#9362 is resolved, this PR adds a
test to check that a quantized PyTorch MobileNetV2 model is converted correctly.

Change-Id: Iaf2d38ce71c008e0141a4a2536bd54c2c9f3fe3d