
[Relay][Frontend] Adding ADD operator to tflite frontend for compiling the MobileNetV2 #2919

Merged · 6 commits · Apr 3, 2019

Conversation

@gomida (Contributor) commented Mar 28, 2019

  • Adding the ADD operator to the TFLite frontend for compiling MobileNetV2
  • Adding a test case

@FrozenGene (Member) left a comment

LGTM. @gomida Thanks for your contribution.

@gomida (Contributor, Author) commented Apr 3, 2019

Hi @FrozenGene, do I need to request a review from an additional member?

@FrozenGene (Member) commented

@srkreddy1238 Please help to review.

@srkreddy1238 (Contributor) left a comment

LGTM. Thanks.

@tqchen tqchen merged commit e68874d into apache:master Apr 3, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request Apr 7, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request Apr 8, 2019
wweic pushed a commit to wweic/tvm that referenced this pull request Apr 10, 2019
wweic pushed a commit to neo-ai/tvm that referenced this pull request Apr 11, 2019
@FrozenGene (Member) commented

@gomida Sorry to disturb you, but I may have found a potential issue we didn't consider in the code. Imagine the LHS shape is not the same as the RHS shape: for example, the TFLite input layout is [1, 16, 32, 180] and the RHS shape is [180], so the result should be [1, 16, 32, 180]. However, we transpose the input layout to [1, 180, 16, 32] while the RHS shape stays [180]. The code currently passes this case straight through, so we end up computing [1, 180, 16, 32] + [180], which cannot broadcast.
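To see the broadcast rule in question, here is a quick NumPy check (NumPy and Relay share the same trailing-axis broadcasting semantics; the shapes are the ones from the example above):

```python
import numpy as np

nhwc = np.zeros((1, 16, 32, 180))  # TFLite layout, channels last
nchw = np.zeros((1, 180, 16, 32))  # after the frontend's transpose
rhs = np.zeros((180,))             # per-channel constant

print((nhwc + rhs).shape)  # (1, 16, 32, 180): trailing dims 180 == 180

try:
    nchw + rhs  # trailing dims 32 vs 180, no broadcast possible
except ValueError as err:
    print(err)  # "operands could not be broadcast together ..."
```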

One way to solve this (see the sketch after this list) is:

  • Transpose the LHS to NHWC
  • Keep the RHS the same
  • Call _op.add
  • Transpose back to NCHW
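A minimal sketch of that recipe in Relay; the helper name add_with_layout_fix and the variable names are illustrative, not the actual code in the TFLite frontend:

```python
from tvm import relay

def add_with_layout_fix(lhs_nchw, rhs):
    # The LHS was transposed to NCHW on import; the RHS keeps its
    # TFLite trailing-channel shape, e.g. [180].
    lhs_nhwc = relay.transpose(lhs_nchw, axes=(0, 2, 3, 1))  # NCHW -> NHWC
    out_nhwc = relay.add(lhs_nhwc, rhs)  # broadcasts over the last axis
    return relay.transpose(out_nhwc, axes=(0, 3, 1, 2))      # NHWC -> NCHW
```

The two extra transposes are cheap, and adjacent transpose ops can be fused away, as noted later in this thread.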

My original plan was to support the TFLite NHWC data layout, as described in #2519, after my quantization work is upstreamed. However, if you are interested in supporting it, you are welcome to; it shouldn't be difficult.

@gomida (Contributor, Author) commented Apr 15, 2019

@FrozenGene sure, I'll look into the issue :)

@gomida (Contributor, Author) commented Apr 17, 2019

@FrozenGene while testing, I found some more problems. We've assumed that the LHS is a tensor, but in some cases the LHS may not be one (in which case the RHS is the tensor), and that requires more complex handling. On the other hand, converting the layout back to the original NHWC would add many transpose layers and make the model as slow as imported TF models; that's not an easy decision, since we currently have no "automatic layout conversion pass" as mentioned in #2519.
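For reference, a hedged sketch of what symmetric handling could look like; add_either_order and lhs_is_activation are hypothetical names, and a real implementation would inspect the TFLite buffers to decide which operand is the constant:

```python
from tvm import relay

def _to_nhwc(x):
    return relay.transpose(x, axes=(0, 2, 3, 1))

def _to_nchw(x):
    return relay.transpose(x, axes=(0, 3, 1, 2))

def add_either_order(lhs, rhs, lhs_is_activation):
    # Transpose only the 4-D activation operand; the 1-D per-channel
    # constant stays untouched so the trailing-axis broadcast works.
    if lhs_is_activation:
        return _to_nchw(relay.add(_to_nhwc(lhs), rhs))
    return _to_nchw(relay.add(lhs, _to_nhwc(rhs)))
```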

@FrozenGene (Member) commented

@gomida Could you give an example or a model where the LHS is constant values? During model testing I have only encountered the case where the LHS is a tensor and the RHS is constant.

Performance shouldn't be a problem: we will fuse these transpose ops, the add / transpose ops run very fast, and the model will not contain many add ops.

Additionally, I plan to support the TFLite NHWC data layout and to implement a spatial-pack NHWC schedule on ARM CPU first, after my quantization part is upstreamed. Currently we don't have a spatial-pack NHWC optimization on ARM CPU, which is the main platform where TFLite models run. I don't know whether you are interested in supporting the NHWC layout for TFLite; if you are, you are welcome to open an RFC issue and contribute.
