[Relay] Add tensor rank check for nn.instance_norm
#13280
Merged
This PR adds a rank check for the input tensor in the type inference of `nn.instance_norm`. The reasons are as follows:

First, by the definition of Instance Normalization, only the spatial dimensions are normalized. The input tensor must therefore have rank at least 3 (batch, channel, and at least one spatial dimension); otherwise the operator cannot produce meaningful results.
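To make the rank requirement concrete, here is a minimal pure-Python sketch of 1-D instance normalization (an illustration, not TVM code): statistics are computed per (batch, channel) pair over the spatial axis only, so with rank 2 there is no axis left to normalize over.

```python
import math

def instance_norm_1d(x, eps=1e-5):
    """Instance-normalize a rank-3 tensor laid out as [batch][channel][spatial].

    Mean and variance are computed per (batch, channel) pair over the
    spatial axis only -- which is why the input needs rank >= 3.
    """
    out = []
    for sample in x:
        norm_sample = []
        for channel in sample:
            n = len(channel)
            mean = sum(channel) / n
            var = sum((v - mean) ** 2 for v in channel) / n
            norm_sample.append(
                [(v - mean) / math.sqrt(var + eps) for v in channel]
            )
        out.append(norm_sample)
    return out

# One sample, one channel, four spatial elements:
# the spatial values are shifted to zero mean and unit variance.
y = instance_norm_1d([[[1.0, 2.0, 3.0, 4.0]]])
```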
Second, `nn.instance_norm` with an input tensor of rank less than 3 causes a problem in the SimplifyInference optimization pass. This pass computes the reduced axes before lowering the operator to its lower-level computation definition:

tvm/src/relay/transforms/simplify_inference.cc
Lines 149 to 152 in f15afd2
If the rank of the input tensor is less than 3, `reduced_axes` is empty. According to `GetRealAxis`, an empty axis list means that all dimensions are reduced:

tvm/include/tvm/topi/reduction.h
Lines 67 to 70 in f15afd2
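The empty-axis behavior can be sketched in pure Python (a simplified analogue of `GetRealAxis`, not the actual TVM implementation):

```python
def get_real_axis(ndim, axes):
    """Simplified analogue of topi's GetRealAxis: an empty axis list
    is treated as "reduce over every dimension"."""
    if not axes:
        # Empty axis list: every axis is reduced, including batch and channel.
        return list(range(ndim))
    # Otherwise normalize (possibly negative) axes into the [0, ndim) range.
    return sorted({a % ndim for a in axes})

# For a rank-2 input, instance_norm's reduced_axes comes out empty,
# so both axes -- batch (0) and channel (1) -- end up reduced:
print(get_real_axis(2, []))       # every axis is reduced
print(get_real_axis(4, [2, 3]))   # the intended spatial axes for NCHW
```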
This is problematic because we do not actually want the batch and `axis` (channel) dimensions to be reduced.

cc @masahi