
[TVMC] Improve --desired-layouts functionality #14272

Merged 7 commits into apache:main on Mar 16, 2023

Conversation

PhilippvK
Contributor

This PR aims to make the --desired-layout argument more powerful, building on the previously merged changes from #14010 (@srkreddy1238), by introducing two new features:

  1. Allow passing multiple arguments to --desired-layout instead of only one, to specify one layout per transformed operator listed in --desired-layout-ops. (The number of arguments has to be either 1 or match the number of transformed operators.)
  2. Optionally, you can now specify a non-default kernel layout as follows: NHWC:HWIO

Example usage: tvmc compile … --desired-layout-ops nn.max_pool2d qnn.conv2d --desired-layout NCHW NHWC:HWIO
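
At the Relay level, the flag combination above should correspond roughly to the following ConvertLayout configuration (a minimal sketch using the public relay.transform.ConvertLayout API; the exact mapping performed inside tvmc is an assumption here):

```python
import tvm
from tvm import relay

# Hypothetical equivalent of:
#   --desired-layout-ops nn.max_pool2d qnn.conv2d --desired-layout NCHW NHWC:HWIO
# Each op gets a [data_layout] or [data_layout, kernel_layout] entry.
desired_layouts = {
    "nn.max_pool2d": ["NCHW"],
    "qnn.conv2d": ["NHWC", "HWIO"],
}

def convert_layouts(mod):
    """Apply the desired layouts to a Relay module."""
    seq = tvm.transform.Sequential(
        [
            relay.transform.RemoveUnusedFunctions(),
            relay.transform.ConvertLayout(desired_layouts),
        ]
    )
    with tvm.transform.PassContext(opt_level=3):
        return seq(mod)
```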

I also added unit tests for the new use-cases.

Known Limitations:

  • It would make sense to specify individual kernel layouts for regular convolutions and depthwise ones. However, since both are usually implemented as the generalized nn.conv2d, we cannot transform them individually. Are there any good workarounds for this?
  • The arguments of --desired-layouts were previously checked for validity during cmdline parsing (e.g. only NCHW and NHWC were allowed), which is no longer possible. Should I add a regular expression for that (a possible sketch follows this list)?
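
For illustration, such a check could look roughly like the sketch below (purely hypothetical and not part of this PR; the pattern and error handling are assumptions):

```python
import argparse
import re

# Hypothetical pattern: a layout made of letters and digits (e.g. "NHWC",
# "NCHW4c"), optionally followed by ':' and a kernel layout (e.g. "NHWC:HWIO").
LAYOUT_RE = re.compile(r"^[A-Za-z0-9]+(:[A-Za-z0-9]+)?$")

def parse_desired_layout(value):
    """argparse 'type' callback validating a --desired-layout argument."""
    if not LAYOUT_RE.match(value):
        raise argparse.ArgumentTypeError(f"invalid layout specification: {value!r}")
    return value
```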

@tvm-bot
Collaborator

tvm-bot commented Mar 11, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

@PhilippvK force-pushed the improve-desired-layouts branch from 7f8af41 to eaca5d5 on March 11, 2023 at 19:49
@lhutton1
Contributor

Thanks @PhilippvK for exposing this functionality, overall LGTM! Just responding to your questions as well as a couple of nits:

It would make sense to specify individual kernel layouts for regular convolutions and depthwise ones. However since both are usually implemented as generalized nn.conv2d, we can not transform them individually. Are there any good workarounds for this?

One way that I can think of, although it's not very elegant, is to allow a special nn.conv2d_depthwise parameter through desired_layout_ops, then override the FTVMConvertOpLayout function for nn.conv2d before running the pass, similar to what is done here. Perhaps we can leave it as a "TODO" for now and create a tracking issue for it?
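
Such an override might look roughly like the sketch below. This is a hypothetical illustration based on TVM's register_convert_op_layout mechanism; the depthwise detection heuristic, the level=11 re-registration, and the helper name are assumptions rather than part of this PR:

```python
from tvm import relay
from tvm.relay.op import register_convert_op_layout

def register_depthwise_aware_conv2d(default_layouts, depthwise_layouts):
    """Hypothetical helper: make nn.conv2d's layout-conversion hook pick a
    separate desired layout for depthwise convolutions."""

    # level=11 is assumed here to take precedence over the built-in hook.
    @register_convert_op_layout("nn.conv2d", level=11)
    def convert_conv2d(attrs, inputs, tinfos, desired_layouts):
        data, weight = inputs
        new_attrs = dict(attrs)

        # Heuristic depthwise check: groups equals the input channel count.
        channel_axis = str(attrs.data_layout).index("C")
        is_depthwise = attrs.groups > 1 and int(attrs.groups) == int(
            tinfos[0].shape[channel_axis]
        )

        # The desired_layouts argument provided by ConvertLayout is ignored
        # here in favour of the values closed over above.
        chosen = depthwise_layouts if is_depthwise else default_layouts
        new_attrs["data_layout"] = chosen[0]
        if len(chosen) > 1 and chosen[1] != "default":
            new_attrs["kernel_layout"] = chosen[1]
        return relay.nn.conv2d(data, weight, **new_attrs)

    return convert_conv2d
```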

The arguments of --desired-layouts have previously been checked for validity during cmdline parsing (e.g. only NCHW and NHWC are allowed) which is not possible anymore. Should I add a regular expression for that?

I believe it should be okay to leave this, especially since this will get more complex when you consider the layouts of conv3d etc. Unsupported layouts should raise an error at a later stage while running the ConvertLayout pass anyway.

tests/python/driver/tvmc/test_transform.py (outdated, resolved)
python/tvm/driver/tvmc/transform.py (outdated, resolved)
python/tvm/driver/tvmc/transform.py (outdated, resolved)
@PhilippvK
Contributor Author

One way that I can think of, although it's not very elegant, is to allow a special nn.conv2d_depthwise parameter through desired_layout_ops, then override the FTVMConvertOpLayout function for nn.conv2d before running the pass, similar to what is done here. Perhaps we can leave it as a "TODO" for now and create a tracking issue for it?

Thank you for pointing that out. If I understand it correctly, we would then need to run the ConvertLayout pass twice, since we would not be able to pass layouts for both types of convolutions at the same time:
1. with FTVMConvertOpLayout being a no-op for depthwise convolutions and the desired layout applied to nn.conv2d
2. with FTVMConvertOpLayout being a no-op for non-depthwise convolutions and the desired layout applied to nn.conv2d

I agree that this is far from elegant and we should track this somewhere else...

@PhilippvK force-pushed the improve-desired-layouts branch from e0f8517 to df17c27 on March 15, 2023 at 08:18
@lhutton1
Contributor

Thanks for the updates, LGTM after fixing the test failures :)

I believe the ConvertLayout pass would still only need to be applied once. The FTVMConvertOpLayout function can be run on each nn.conv2d that the ConvertLayout pass encounters, check whether the current convolution is regular or depthwise, and then use the layout specified for nn.conv2d_depthwise if the convolution is depthwise and such a layout has been provided.

Before passing the desired_layouts dictionary to the ConvertLayout pass itself, we can remove the special nn.conv2d_depthwise value.
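
In code, that pre-processing step could look roughly like this (hypothetical sketch; the nn.conv2d_depthwise key, the desired_layouts/mod variables, and the helper from the earlier sketch are assumptions):

```python
from tvm import relay

# Hypothetical tvmc-side handling before running ConvertLayout (assumes the
# special "nn.conv2d_depthwise" key plus a regular "nn.conv2d" entry).
depthwise_layouts = desired_layouts.pop("nn.conv2d_depthwise", None)
if depthwise_layouts is not None:
    # Re-register the nn.conv2d hook (see the earlier sketch) so it switches
    # to depthwise_layouts when it detects a depthwise convolution.
    register_depthwise_aware_conv2d(desired_layouts["nn.conv2d"], depthwise_layouts)

mod = relay.transform.ConvertLayout(desired_layouts)(mod)
```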

Apologies for the rushed explanation, happy to help explain further in a separate conversation

@lhutton1
Contributor

LGTM! Thanks for the work on this @PhilippvK, I'll leave this open for a day or so to give others a chance to comment.

@PhilippvK
Contributor Author

@lhutton1 Should I rebase/squash the commits before merge?

@lhutton1
Contributor

No need to rebase for now, and no need to squash the commits as this will be done automatically when merging

@lhutton1 lhutton1 merged commit f4520c4 into apache:main Mar 16, 2023
@lhutton1
Contributor

Thanks @PhilippvK!

@PhilippvK
Contributor Author

Perhaps we can leave it as a "TODO" for now and create a tracking issue for it?

@lhutton1 Followup discussion: https://discuss.tvm.apache.org/t/convertlayout-dealing-with-depthwise-convolutions/14581
