Clarify the restriction for minValue and maxValue of MLClampOptions #396

Comments
update 2024-04-18 - all known libraries now appear to accept this correctly.

tldr: All other tested ML libraries handle empty ranges (min == max) properly, as expected. For inverted ranges (min > max), though, the same inputs yield 4 different equivalence classes of behavior across the libraries below (the code snippets and their outputs were elided in this capture):

- NumPy (https://numpy.org/doc/stable/reference/generated/numpy.clip.html)
- TensorFlow (https://www.tensorflow.org/api_docs/python/tf/clip_by_value)
- PyTorch (https://pytorch.org/docs/stable/generated/torch.clamp.html)
- ONNX (https://github.com/onnx/onnx/blob/main/docs/Operators.md#Clip) - the DML EP passes, but the CPU EP fails.
- C++ std::clamp (https://en.cppreference.com/w/cpp/algorithm/clamp)
- DirectML (DML_OPERATOR_ELEMENT_WISE_CLIP)
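A minimal probe, assuming only NumPy and PyTorch are installed (a sketch, not the original snippet from this comment), reproduces the empty-range and inverted-range cases:

```python
import numpy as np
import torch

x = np.array([-2.0, 0.0, 2.0], dtype=np.float32)

# Empty range (min == max): both libraries fill every element with the bound.
print(np.clip(x, 1.0, 1.0))                         # [1. 1. 1.]
print(torch.clamp(torch.from_numpy(x), 1.0, 1.0))   # tensor([1., 1., 1.])

# Inverted range (min > max): NumPy computes minimum(a_max, maximum(a, a_min)),
# and PyTorch documents that min > max sets every element to max, so both
# yield all -2.0 here; other libraries in the list above behave differently,
# hence the four equivalence classes.
print(np.clip(x, 2.0, -2.0))                        # [-2. -2. -2.]
print(torch.clamp(torch.from_numpy(x), 2.0, -2.0))  # tensor([-2., -2., -2.])
```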
As a backend API, I think WebNN should not allow a corner-case situation that makes it hard for a specific implementation to follow through. For example, if WebNN supports a degenerate range (min == max) while XNNPACK can't implement it, that is a burden on the implementer, one that could lead to a latent corner-case bug that no one knows about until it hits a customer. Even among implementations that do support it, they are still likely to behave differently from one another depending on their specific policy or other ambiguous circumstances, i.e. should the behavior be a no-op, a min-filling, or something else? Therefore, in this situation I think it is better to simply assert that min < max, not min <= max. If the framework on top wants to support a degenerate range, it should be straightforward for it to apply its own policy there, independent of WebNN or whatever other backend it supports.

As to whether we should also support inverted ranges, my rationale is similar to the degenerate-range case: it is non-obvious, and supporting a non-obvious case could impose a bug-compatibility burden on the implementation side in the future.
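As a hypothetical sketch of that framework-level policy (the `builder` methods here are illustrative stand-ins, not the real WebNN API): if the spec asserted min < max, a framework on top could still offer a degenerate range by lowering it to a constant fill itself:

```python
def lower_clamp(builder, input_op, min_value, max_value):
    # Hypothetical framework shim above a strict backend rule (min < max).
    if min_value > max_value:
        raise ValueError("inverted range: minValue must be < maxValue")
    if min_value == max_value:
        # The framework's own policy for the degenerate range - here, a
        # constant fill. (A no-op or an error would be other valid policies;
        # that ambiguity is exactly the concern above.)
        return builder.constant_fill(input_op, value=min_value)
    return builder.clamp(input_op, min_value, max_value)
```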
XNNPACK allows min and max to be the same: google/XNNPACK@d98c1ff (Thanks @fdwr for the notice)
* Conventions: Ensure all dict members have definitions

- Documents the convention.
- Dedupes `MLEluOptions`'s `alpha` definitions.
- Dedupes `MLClampOptions`'s `minValue` and `maxValue`, and references #396.
- Moves `MLOperandDescriptor` member documentation out of IDL comments.
- Introduces simple definitions for `MLComputeResult`'s `inputs` and `outputs`.
- Converts "device type" and "power preference" into definitions for `MLContextOptions`'s `deviceType` and `powerPreference`.

Also includes these adjacent changes to improve the document flow:

- Moves the "context type" definition into the `MLContext` section.
- Moves the Permission Policy Integration section from API into Programming Model, which seems like a slightly better home for it.

Fixes #483

* Add subsection for MLContextOptions
* Feedback from @huningxin
Hey @fdwr - is there confusion about TF.js here, or am I reading your comment wrong? (The TF.js test link was elided in this capture.) I believe XNNPACK was the only outlier, and that's been fixed, so I think we're good to resolve this and remove the note in the spec referencing this issue. Can you confirm, or clarify the remaining concern?
@inexorabletash Indeed, TF.js looks good too. So that's all of them then, and the note should be removed (I can tomorrow). Updated the table above.
Per discussion in the issue, now that XNNPACK is updated there are no backends that disallow minValue == maxValue, so the issue can be resolved and the spec note can be removed. Resolves webmachinelearning#396
WebNN clamp limits the input tensor element-wise within a range specified by the minimum and maximum values. The minimum and maximum values can be specified by MLClampOptions. This open item was raised by @quidity (Thanks Alex!) in a Chromium CL review.

The current Chromium implementation requires the min value to be less than or equal to the max value, and TensorFlow.js implements the same restriction. However, some backends, like XNNPACK, are stricter and require the min value to be strictly less than the max value.

The WebNN spec should clarify which restriction applies.

/cc @fdwr for input on DirectML.
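For concreteness, here is a minimal sketch of clamp under the looser rule (min <= max) that Chromium and TF.js implement; it is an illustration under those assumptions, not spec text, with NumPy's element-wise ops standing in for a real backend:

```python
import numpy as np

def clamp(x, min_value=float("-inf"), max_value=float("inf")):
    # Looser rule (Chromium, TF.js): reject only inverted ranges.
    # min_value == max_value is allowed and fills the output with that value;
    # a backend enforcing the stricter rule (pre-fix XNNPACK) would also
    # reject the equality case.
    if min_value > max_value:
        raise ValueError("minValue must be less than or equal to maxValue")
    return np.minimum(np.maximum(x, min_value), max_value)
```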