
[microNPU] Add hardware constraints for binary elementwise #13772

Merged
3 commits merged into apache:main on Jan 18, 2023

Conversation

Aleksei-grovety (Contributor)

Min and max operations are not fused with requantize when the scales differ, as this is not supported on the NPU: the hardware constraint is described for the NPU_SET_OFM_SCALE register (https://developer.arm.com/documentation/102420/0200/Programmers-model/Command-stream/cmd1-commands-).

min/max operations with matching scales are offloaded to the NPU as ethosu_binary_elementwise
min/max operations with different scales are offloaded to the NPU as ethosu_binary_elementwise + ethosu_identity
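A minimal sketch of the constraint check (hypothetical helper name, not the actual TVM source): on the NPU, requantize can only be fused into a MIN/MAX when the input and output quantization scales all match, since NPU_SET_OFM_SCALE holds a single output scale.

import math

def can_fuse_requantize(ifm_scale, ifm2_scale, ofm_scale):
    # Fused requantize requires matching scales on both inputs and
    # the output; otherwise legalization keeps requantize as a
    # standalone ethosu_identity after the binary elementwise.
    return math.isclose(ifm_scale, ifm2_scale) and math.isclose(ifm_scale, ofm_scale)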

cc @leandron, @ekalda, @lhutton1

tvm-bot (Collaborator) commented on Jan 12, 2023

Thanks for contributing to TVM! Please refer to the contributing guidelines https://tvm.apache.org/docs/contribute/ for useful information and tips. Please request code reviews from Reviewers by @-ing them in a comment.

Generated by tvm-bot

github-actions bot requested a review from leandron on January 12, 2023 14:28
ekalda (Contributor) left a comment

Thanks @alexey-yazev, looks good in general, some comments about the legalization tests :)

@@ -881,7 +888,7 @@ def verify(ext_func):
([1, 4, 4], [4, 1], False),
],
)
-@pytest.mark.parametrize("activation_function", ["NONE", "RELU"])
+@pytest.mark.parametrize("activation_function", [None, tf.nn.relu, tf.nn.relu6, relu_n1_to_1])
ekalda (Contributor):

Suggested change:
-@pytest.mark.parametrize("activation_function", [None, tf.nn.relu, tf.nn.relu6, relu_n1_to_1])
+@pytest.mark.parametrize("activation_function", [None, tf.nn.relu])

Since this test looks at the graph entirely at the Relay level, where all the RELUs are just Relay clip operations, I don't think we benefit much from an extra 70 tests that differ only in the min and max attributes of clip.

Aleksei-grovety (Contributor, Author):

I agree with this; I will add a separate test for the case with the MAX operation and relu_n1_to_1 activation.
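For reference, relu_n1_to_1 corresponds to TFLite's RELU_N1_TO_1 activation, which clamps values to [-1, 1]. A minimal sketch of such a helper (the test file's actual definition may differ):

import tensorflow as tf

def relu_n1_to_1(x):
    # Clamp activations to the range [-1, 1].
    return tf.math.maximum(-1.0, tf.math.minimum(x, 1.0))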

Comment on lines +969 to +988
if has_separate_requantize:
        # In the case where requantize cannot be fused with MIN/MAX + CLIP due to
        # hardware constraints, the quantization values should be the defaults,
        # since requantize is a separate operation.
assert float(op.attrs.ifm_scale) == 1.0
assert int(op.attrs.ifm_zero_point) == 0
assert float(op.attrs.ifm2_scale) == 1.0
assert int(op.attrs.ifm2_zero_point) == 0
assert float(op.attrs.ofm_scale) == 1.0
assert int(op.attrs.ofm_zero_point) == 0
else:
# MIN and MAX with an activation must have a requantize operation
# baked into the output. To check the extra requantize node was
# picked up by the pattern, we can make sure the quantization
# information is not default.
assert float(op.attrs.ifm_scale) != 1.0
assert int(op.attrs.ifm_zero_point) != 0
assert float(op.attrs.ifm2_scale) != 1.0
assert int(op.attrs.ifm2_zero_point) != 0
assert float(op.attrs.ofm_scale) != 1.0
assert int(op.attrs.ofm_zero_point) != 0
ekalda (Contributor):

Do both of these blocks get run? It looks like we are using the same method of generating the representative dataset (which will determine the qnn params) for all the tests, so I suspect we will always create IFMs with differing qnn params and therefore test only one of the patterns here.

Aleksei-grovety (Contributor, Author):

Yes, both of these blocks get run. The first block runs for cases with the MAX operation and relu_n1_to_1 activation, for example test_tflite_binary_elemwise_legalize[relu_n1_to_1-ifm_shape0-ifm2_shape0-False-MAX]:

fn (%tvmgen_default_ethos_u_main_0_ifms: Tensor[(48), int8] /* ty=Tensor[(48), int8] */, Inline=1, Compiler="ethos-u", global_symbol="tvmgen_default_ethos_u_main_0", Primitive=1) -> Tensor[(1, 2, 3, 4), int8] {
  %0 = split(%tvmgen_default_ethos_u_main_0_ifms, indices_or_sections=[24]) /* ty=(Tensor[(24), int8], Tensor[(24), int8]) */;
  %1 = %0.0 /* ty=Tensor[(24), int8] */;
  %2 = %0.1 /* ty=Tensor[(24), int8] */;
  %3 = reshape(%1, newshape=[1, 2, 3, 4]) /* ty=Tensor[(1, 2, 3, 4), int8] */;
  %4 = reshape(%2, newshape=[1, 2, 3, 4]) /* ty=Tensor[(1, 2, 3, 4), int8] */;
  %5 = contrib.ethosu.binary_elementwise(%3, %4, meta[relay.Constant][0] /* ty=Tensor[(0), int8] */, operator_type="MAX", ifm_scale=1f, ifm_zero_point=0, ifm2_scale=1f, ifm2_zero_point=0, ofm_scale=1f, ofm_zero_point=0, ifm_channels=4, ifm2_channels=4, activation="CLIP", clip_min=-128, ofm_dtype="int8") /* ty=Tensor[(1, 2, 3, 4), int8] */;
  contrib.ethosu.identity(%5, meta[relay.Constant][1] /* ty=Tensor[(0), int8] */, ifm_scale=0.00783747f, ifm_zero_point=-128, ofm_scale=0.00392157f, ofm_zero_point=-128) /* ty=Tensor[(1, 2, 3, 4), int8] */
} /* ty=fn (Tensor[(48), int8]) -> Tensor[(1, 2, 3, 4), int8] */

In this case the scales are different; in the other cases they are the same.
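As an illustration (a hypothetical helper, not the test's actual code), the two cases can be told apart by checking whether a trailing ethosu_identity survives in the legalized Relay graph:

from tvm import relay
from tvm.ir import Op

def has_separate_requantize(func):
    # True when legalization kept requantize as a standalone
    # contrib.ethosu.identity call after the binary elementwise.
    found = [False]

    def visit(node):
        if (
            isinstance(node, relay.Call)
            and isinstance(node.op, Op)
            and node.op.name == "contrib.ethosu.identity"
        ):
            found[0] = True

    relay.analysis.post_order_visit(func.body, visit)
    return found[0]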

ekalda (Contributor):

Ok cool, thanks for clarifying :)

Aleksei-grovety force-pushed the ethosu-binary-elementwise-bugfix branch from d3e9229 to c07d019 on January 18, 2023 07:45
ekalda (Contributor) left a comment

Thanks @alexey-yazev, LGTM!

@ekalda ekalda merged commit 60358a1 into apache:main Jan 18, 2023
fzi-peccia pushed a commit to fzi-peccia/tvm that referenced this pull request Mar 27, 2023