
[Relay] Refactor - Move infer types to a header file. #3783

Merged
1 commit merged on Aug 30, 2019

Conversation

anijain2305
Contributor

Moving the infer types to a header file. This helps share infer type functions across Relay and dialects (like QNN). The current example only covers transform.h, just to show the changes.

@tqchen
Member

tqchen commented Aug 15, 2019

In these cases, perhaps it makes sense to keep the files as they are and only add the declarations in an internal header.

@anijain2305
Contributor Author

anijain2305 commented Aug 16, 2019

I was also thinking about that. However, we found that dialect operators can have their own new Attrs (like QnnConv2DAttrs). These infer type functions depend on the attrs types, which means we have to make the infer type functions templated while sharing the same default implementation for both attrs (Conv2DAttrs and QnnConv2DAttrs). In that case, it makes sense to keep the implementation in the header file because of the templates.

Relevant line - https://github.com/dmlc/tvm/pull/3580/files#diff-077304c74fe3bf38826eb12eb52ff29cR106 (I have also made the infer function templated there)

namespace tvm {
namespace relay {

bool CastRel(const Array<Type>& types,
Contributor

If moving to the header, we will need to inline all these methods.

@anijain2305
Contributor Author

@tqchen let me know if you have other ideas. With templated functions, it seems we have to put the declaration and the implementation in the same place.

@anijain2305
Contributor Author

Ping @tqchen @jroesch @vinx13

@slyubomirsky
Contributor

Perhaps I'm missing something, but if these particular functions are not templated (I couldn't see the template indicator anywhere in the file, but please correct me if I missed it), why would we want to have the implementations in the header file?

@anijain2305
Contributor Author

Thanks @slyubomirsky for the interest. The templates arise when we use dialects and want to share the InferType functionality. This happens in the QNN dialect, for example for the conv2d operator. We have new attrs, QnnConv2DAttrs, which have more attributes than Conv2DAttrs, but the infer type functionality is exactly the same.

Relevant line - https://github.com/dmlc/tvm/pull/3580/files#diff-077304c74fe3bf38826eb12eb52ff29cR106 (I have also made the infer function templated there)

I have also updated this PR to show how this will look if we go down this path.

@slyubomirsky
Contributor

Ah, thanks for the explanation.

@anijain2305
Contributor Author

Ping folks for discussion and next action items

@anijain2305
Contributor Author

Ping @tqchen @yzhliu @zhiics @vinx13 @slyubomirsky

Given that this has been idle for a few days, I am guessing that there is unease about making this big a change in the Relay codebase. How about an alternative, then?

If we don't want to share the InferType (and possibly others in the future) among dialects, dialect ops will have to write their own InferType functions. For QNN ops, it is a coincidence that the InferType functionality is the same. Currently, there are around 8-10 QNN ops that we are targeting. We can copy the infer-type functions for these ops from Relay into QNN and wrap them in different functions. There is code duplication here, but with much less impact on the Relay codebase. What do you think?

@shoubhik
Contributor

Another alternative could be to move only the needed ops into the header, for a smaller footprint.

@zhiics
Member

zhiics commented Aug 23, 2019

I would think having some code duplication in this case is okay, since the added code is a dialect, which is quite a small subset of Relay ops. Otherwise, we may need to move all the XXRel functions around to keep consistency.

@zhiics
Member

zhiics commented Aug 23, 2019

This has also been blocking @anijain2305's other PRs for a while, like #3819 and #3736. We should probably reach some consensus to unblock them.

@yzhliu
Member

yzhliu commented Aug 25, 2019

+1 for @shoubhik's suggestion. I feel that whether to make the InferType functions templated does not necessarily have to be an either/or question.

@tqchen
Member

tqchen commented Aug 29, 2019

OK, I am fine with either way. @yzhliu @zhiics, please try to summarize the discussion and suggest a path forward. Given that I am traveling at the moment, please feel free to take over and manage the PR.

@zhiics
Member

zhiics commented Aug 29, 2019

Summary of the discussions:

The problem is that the current implementations of InferType live in the .cc files, and @anijain2305 needs some of them for QNN ops. To use the same InferType function for different attrs types, we can template them, but then the default implementation should be in the header file. Alternatively, we can have specializations in different source files.

So there are two major options here:

  • Move the code of XXRel for the targeted ops to the header file. This may introduce some confusion, because most of them are in the source files while a few are in the header. However, in the future we only need to change this function in one place (the default implementation), which reduces the chance of bugs.

  • Have the same implementation for the interested ops under the qnn namespace. This has slightly better readability, but it may introduce some extra maintenance burden for the XXRel functions, particularly for conv2d.

While none of the folks in the discussion have a strong preference between these two approaches, it looks like the former is more favorable to most people here.

I would suggest we go with the first approach and move forward.

@anijain2305 anijain2305 force-pushed the refactor branch 2 times, most recently from 9972be5 to 7b6bd05 on August 29, 2019 18:50
@zhiics
Member

zhiics commented Aug 30, 2019

Thanks everyone. This is now merged.

@zhiics zhiics merged commit e99def2 into apache:master Aug 30, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request Sep 16, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Sep 16, 2019
@anijain2305 anijain2305 deleted the refactor branch November 13, 2019 00:30
6 participants