
[webnn] Add float32 tests for WebNN API instanceNormalization op #43891

Merged 2 commits into web-platform-tests:master on Jan 25, 2024

Conversation

BruceDai (Contributor) commented Jan 9, 2024


// https://webmachinelearning.github.io/webnn/#api-mlgraphbuilder-instancenorm

testWebNNOperation('instanceNormalization', buildLayerNorm, 'gpu');
BruceDai (Contributor Author):

The buildLayerNorm function from #43684 can be reused here.

@@ -328,6 +328,7 @@ const PrecisionMetrics = {
elu: {ULP: {float32: 18, float16: 18}},
expand: {ULP: {float32: 0, float16: 0}},
gemm: {ULP: {float32: getGemmPrecisionTolerance, float16: getGemmPrecisionTolerance}},
instanceNormalization: {ATOL: {float32: 1/1024, float16: 1/512}},
BruceDai (Contributor Author):

This is a placeholder. I tried testing by ULP with input shape [2, 3, 30, 30], and the maximum observed ULP distance was about 388. With this ATOL tolerance the tests pass. @fdwr Please help provide the tolerance criteria when you're available, thanks.

fdwr:

The division by the square root really complicates giving a straightforward answer ((a - mean) * scale / sqrt(variance + epsilon) + bias). Based just on the number of lossy operations, you'd expect a worst case of about...

6 + mean cost + variance cost
mean cost = ElementsToReduce + 1
variance cost = (Input.Sizes.W * Input.Sizes.H) * 3 + 1

...but because the subtraction can produce nearly equal numbers, and the division magnifies differences, we get a larger delta than that. Comfortable tolerance values for many GPUs would be ~840 ULP for float32 and ~8,400 ULP for float16.
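For illustration, here is a minimal sketch of how that cost model could be turned into a shape-dependent tolerance, in the spirit of the existing getGemmPrecisionTolerance helper. The function name and signature are assumptions for this sketch, not the helper the tests actually use:

// Illustrative only: estimate a worst-case ULP tolerance for
// instanceNormalization from the number of lossy float operations,
// following the cost model above. instanceNormalization reduces over
// the spatial dimensions of an NCHW input, so ElementsToReduce = H * W.
function estimateInstanceNormUlpTolerance(inputShape /* [N, C, H, W] */) {
  const [, , height, width] = inputShape;
  const elementsToReduce = height * width;
  const meanCost = elementsToReduce + 1;
  const varianceCost = elementsToReduce * 3 + 1;
  // 6 lossy ops in (a - mean) * scale / sqrt(variance + epsilon) + bias
  return 6 + meanCost + varianceCost;
}

For the [2, 3, 30, 30] input mentioned above this gives 6 + 901 + 2701 = 3608 ULP as a naive upper bound, well above the ~388 ULP actually observed there.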

BruceDai (Contributor Author):

@fdwr Thanks!
I've updated the tolerance as you recommended; please take another look. ☕
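Presumably the PrecisionMetrics entry then became something like the following (a reconstruction based on the recommended values, not quoted from the updated diff):

instanceNormalization: {ULP: {float32: 840, float16: 8400}},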

}
},
{
"name": "instanceNormalization float32 4D tensor options.scale",
BruceDai (Contributor Author):

Here's a similar issue to the one seen with layerNormalization: "NotSupportedError: DirectML: instanceNormalization: The scale and bias must be both given or not given."
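For context, a minimal sketch of the pattern that triggers this error, assuming a plain WebNN MLGraphBuilder setup (the shapes here are illustrative, and descriptor field names follow the WebNN spec of that period):

// Only options.scale is provided, with no matching options.bias; the
// DirectML backend rejects this combination with the NotSupportedError above.
const builder = new MLGraphBuilder(context);
const input = builder.input('input', {dataType: 'float32', dimensions: [2, 3, 4, 4]});
const scale = builder.constant(
    {dataType: 'float32', dimensions: [3]}, new Float32Array(3).fill(1));
const output = builder.instanceNormalization(input, {scale});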

BruceDai (Contributor Author):

I've rebased onto the latest master code, PTAL, thanks.

Honry (Contributor) left a comment:

LGTM, thanks!

Honry merged commit d9651c2 into web-platform-tests:master on Jan 25, 2024
19 checks passed
marcoscaceres pushed a commit that referenced this pull request on Feb 23, 2024:

[webnn] Add float32 tests for WebNN API instanceNormalization op (#43891)

* [webnn] Add float32 tests for WebNN API instanceNormalization op
* [webnn] Update tolerance for instanceNormalization op