
Simplify the operand layout support of conv2d and pooling 2d operations #324

huningxin opened this issue Jan 9, 2023 · 13 comments

@huningxin
Contributor

In the existing WebNN spec, conv2d supports two input operand layouts defined by MLInputOperandLayout and four filter operand layouts defined by MLConv2dFilterOperandLayout.

enum MLInputOperandLayout {
  "nchw",
  "nhwc"
};

enum MLConv2dFilterOperandLayout {
  "oihw",
  "hwio",
  "ohwi",
  "ihwo"
};

This may make the implementation more complicated, especially if a native ML framework or OS API doesn't support some of these layouts. If a layout is unsupported, the implementation may need to insert transpose operations around the conv2d operation to convert the unsupported layout into a supported one. This can easily lead to an inefficient graph representation with redundant transpose operations. Alternatively, the implementation may need to optimize the graph with techniques such as "transpose sinking", which requires a more complex implementation. This issue was raised in a Chromium CL review.

To simplify the implementation, the proposal is to reduce the supported operand layouts, for example by keeping only the default one. Because WebNN supports the transpose operation, layout adaptation and graph-level optimization can be handled by ML frameworks, which usually already provide such functionality.
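For illustration, a minimal sketch of what such framework-side adaptation could look like if only the default NCHW input layout were kept, assuming an existing MLGraphBuilder `builder` and an "oihw" filter constant `filterOihw` (names, shapes, and options are illustrative only):

```js
// An NHWC model adapting to an NCHW-only conv2d by wrapping it with transposes.
const inputNhwc = builder.input('input', { dataType: 'float32', shape: [1, 224, 224, 3] });

// NHWC -> NCHW for the conv2d input.
const inputNchw = builder.transpose(inputNhwc, { permutation: [0, 3, 1, 2] });

const convNchw = builder.conv2d(inputNchw, filterOihw, { padding: [1, 1, 1, 1] });

// NCHW -> NHWC so the rest of the NHWC model can consume the result.
const convNhwc = builder.transpose(convNchw, { permutation: [0, 2, 3, 1] });
```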

Thanks @wacky6 for this idea.

aarongable pushed a commit to chromium/chromium that referenced this issue Jan 17, 2023
This CL implements the DefineXnnNodeForConv2d() method for WebNN conv2d
MLOperator that defines XNNPACK conv2d or depthwise conv2d Node
according to MLConv2dOptions.

This CL only supports the input layout “nhwc”, filter layout “ohwi” for
regular conv2d and filter layout “ihwo” for depthwise conv2d. Other
input and filter layouts could be supported by inserting transpose
operators later. There is also another proposal [1] that suggests
simplifying the layouts support. The implementation will be updated
according to the WG’s consensus.

For unit tests, this CL implements Conv2dTest of MLGraphXnnpackTest that
tests conv2d and depthwise conv2d with and without a fused bias
operand and relu activation.

[1]: webmachinelearning/webnn#324


Bug: 1273291
Change-Id: I8a70ee7bb053b386e12ff46e67d139683b044383
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/4031876
Reviewed-by: Jiewei Qian <qjw@chromium.org>
Commit-Queue: ningxin hu <ningxin.hu@intel.com>
Cr-Commit-Position: refs/heads/main@{#1093328}
aarongable pushed a commit to chromium/chromium that referenced this issue Jan 24, 2023
This CL implements DefineXnnNodeForPool2d() method that defines XNNPACK
pooling Nodes for averagePool2d MLOperator by
xnn_define_average_pooling_2d() or
xnn_define_global_average_pooling_2d(), and maxPool2d MLOperator by
xnn_define_max_pooling_2d().

Similar to conv2d, this CL only supports the “nhwc” input layout. The
“nchw” input layout could be supported by inserting transpose operators
later. There is a proposal [1] that suggests simplifying the layout
support. The implementation will be updated according to the WG’s
consensus.

To consolidate the padding size setting and calculation, this CL also
introduces the GetXnnPadding2D() helper that is shared by both XNNPACK
convolution 2d and pooling 2d Nodes.

For unit tests, this CL implements the Pool2dTest of MLGraphXnnpackTest
that tests both kinds of pooling operators, including the global average
pooling variant.

[1]: webmachinelearning/webnn#324

Bug: 1273291
Change-Id: I16ac7260b96762078f6fb00997504e7ef32067da
Reviewed-on: https://chromium-review.googlesource.com/c/chromium/src/+/4031307
Commit-Queue: ningxin hu <ningxin.hu@intel.com>
Reviewed-by: Jiewei Qian <qjw@chromium.org>
Cr-Commit-Position: refs/heads/main@{#1095962}
@anssiko
Member

anssiko commented Mar 23, 2023

This issue was discussed at the WebML WG Teleconference – 16 March 2023. Summary: Awaits further implementation feedback.

@fdwr
Collaborator

fdwr commented Mar 23, 2023

Picking just one preferred layout in WebNN could make life easier for the calling framework and the underlying backend implementation, or it could make it harder for both:

  • If WebNN supports both layouts, then the calling framework can trivially pass on the data as-is. The backend can then bracket affected operators with transposes on entry/exit if needed (note that some backends support both layouts, so there is no additional work there), but since adjacent transposes cancel out anyway (similar to how bracketed quantize/dequantize operators wrapping another operator cancel out), I wager that most internal transposes will disappear, leaving potentially just transposes on the graph edges in/out (see the sketch after this list). The backend has full graph visibility by build() time and should be able to see and collapse any such adjacent transposes, and only the backend has enough information to performantly select the right approach.
  • If WebNN removes support and only accepts one preferred layout, then calling frameworks with different existing conventions will need to insert the transposes before calling WebNN; and then lower-level backends (if their convention differs from WebNN's preferred one) may also need to either strip out any unnecessary transposes or conversely add transposes for performance.
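A small illustration of why adjacent transposes collapse, assuming the usual transpose semantics (output axis i takes input axis permutation[i]):

```js
// NCHW -> NHWC followed by NHWC -> NCHW composes to the identity permutation,
// so a backend can drop such adjacent transpose pairs at build() time.
const toNhwc = [0, 2, 3, 1]; // NCHW -> NHWC
const toNchw = [0, 3, 1, 2]; // NHWC -> NCHW

// Compose two permutations applied in sequence (first, then second).
const compose = (first, second) => second.map((axis) => first[axis]);

console.log(compose(toNhwc, toNchw)); // [0, 1, 2, 3], the identity; the pair cancels
```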

I prefer accepting both (keeping the current spec), but it would be informative to see a holistic table of each major framework's preferred layout and each backend's preferred layout.

[updated...] Table added (✅ == default):

| API | NCHW | NHWC | Notes |
| --- | --- | --- | --- |
| CoreML | NCHW ✅ | - | - |
| ONNX | NCHW ✅ | - | - |
| PyTorch | NCHW ✅ | NHWC | NCHW is default, but NHWC is supported more recently via torch.memory_format.channels_last |
| TensorFlow & TensorFlow.js & TFLite | NCHW | NHWC ✅ | data_format defaults to NHWC or channelsLast |
| TFLite | - | NHWC ✅ | - |
| Intel oneDNN dnnl_format_tag_t | NCHW ✅ | NHWC | "NCHW is the recommended data layout" |
| cuDNN cudnnTensorFormat_t | NCHW ✅ | NHWC | NCHW is the default order for cudnnSetTensor4dDescriptor |
| DirectML | NCHW ✅ | NHWC | NCHW is default, but NHWC is supported via explicit strides |
| XNNPACK | - | NHWC ✅ | - |
| NVIDIA tensor cores | NCHW | NHWC ✅ | "Tensor Cores are fastest when input tensors are laid out in NHWC ... NCHW layouts can still be operated on by Tensor Cores, but include some overhead due to automatic transpose operations" |

@anssiko
Member

anssiko commented Mar 24, 2023

@fdwr thanks for sharing your preference and the supporting details.

As an aside, I encourage incorporating considerations such as this into the specification informatively, alongside the normative prose. It helps explain the specification to people who look at it without the full context that active WG participants have.

@wacky6

wacky6 commented Apr 11, 2023

Layout support comes up in the MLOperand implementation that allows data shape broadcasting. https://chromium-review.googlesource.com/c/chromium/src/+/4396686/comment/f02acaeb_3c2795f2/

Supporting both channel-first and channel-last layouts will complicate the spec steps and the implementation, because the current NumPy broadcast rule implies right-most-first broadcasting.

Example: the caller wants to apply a per-channel multiplication (sketched in code after the list below).

  1. lhs is nchw{1,3,4,4}; caller provides rhs {3}. This will fail under the current broadcast rule. The caller will need to broadcast rhs itself.
  2. lhs is nhwc{1,4,4,3}; caller provides rhs {3}. Works as intended.
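A sketch of the two cases, assuming MLGraphBuilder's mul and reshape and current-spec operand descriptors (the scale values are arbitrary):

```js
const context = await navigator.ml.createContext();
const builder = new MLGraphBuilder(context);

// Per-channel scale for 3 channels.
const scale = builder.constant(
    { dataType: 'float32', shape: [3] },
    new Float32Array([0.5, 1.0, 2.0]));

// Case 2 (NHWC): channels are the right-most dimension, so {3} broadcasts
// against {1, 4, 4, 3} under right-aligned NumPy-style rules.
const nhwc = builder.input('nhwc', { dataType: 'float32', shape: [1, 4, 4, 3] });
const scaledNhwc = builder.mul(nhwc, scale);

// Case 1 (NCHW): {3} does not broadcast against {1, 3, 4, 4}; the caller has
// to reshape the scale to {1, 3, 1, 1} (or {3, 1, 1}) first.
const nchw = builder.input('nchw', { dataType: 'float32', shape: [1, 3, 4, 4] });
const scaledNchw = builder.mul(nchw, builder.reshape(scale, [1, 3, 1, 1]));
```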

How to support case 1 isn't clear. Some questions that might help the decision:

  • How do different backends implement broadcasting? Do they silently create a broadcast array and store it in memory before computation, or do they store the original array and broadcast it on the fly during computation?
  • What's the cost of doing smart broadcasting based on the operand's layout? Or of an option attribute that specifies broadcast order (channel-first, channel-last) where broadcast operations are applicable?
  • What's the cost (overhead) vs. benefit (simple spec and implementation) of transposing?

I have a slight preference for supporting only one layout (NHWC to be precise).

  • The obvious benefit: the spec and implementation will be much simpler. This is beneficial for browser vendor adoption (i.e. implement the API, make sure it conforms to the spec, with less likelihood of producing bugs).
  • API user benefit: a consistent data layout usage. One less thing to check when debugging ("layout + data_shape" becomes "data_shape only").
  • Interop with other Web APIs: existing Web APIs that deal with images use channel-last ordering (like ImageData), so using NHWC for WebNN provides better interop (pass the data as-is, no conversion needed for developers); see the sketch after this list.
  • I agree that if we choose only one layout the caller would need to convert. I think most of the overhead is incurred at build() time (a one-off cost), and a very small overhead is incurred at compute() (converting to the right layout before passing data to the backend, and converting the result back to what's defined in the spec; probably negligible?).
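For instance, a rough sketch of feeding canvas ImageData (channel-last RGBA) into an NHWC input, versus the extra step an NCHW-only API would force on the caller (`canvasContext`, `width`, `height`, and `builder` are assumed to exist):

```js
// ImageData.data is laid out as [height, width, 4] (RGBA), i.e. channel-last.
const imageData = canvasContext.getImageData(0, 0, width, height);
const pixels = Float32Array.from(imageData.data, (v) => v / 255);

// NHWC: the pixel buffer can be bound to this input as-is at compute() time.
const image = builder.input('image', { dataType: 'float32', shape: [1, height, width, 4] });

// NCHW-only: the caller (or the implementation) would first need something like
//   builder.transpose(image, { permutation: [0, 3, 1, 2] })
// or an equivalent CPU-side repacking of `pixels` before binding it.
```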

guschmue pushed a commit to microsoft/onnxruntime that referenced this issue May 9, 2023
**Description**: 

This PR intends to enable the WebNN EP in ONNX Runtime Web. It translates
ONNX nodes via the [WebNN
API](https://webmachinelearning.github.io/webnn/); the EP is implemented
in C++ and uses the Emscripten [Embind
API](https://emscripten.org/docs/porting/connecting_cpp_and_javascript/embind.html#).
It temporarily uses the preferred layout **NHWC** for WebNN graph
partitions, due to the restriction in the WebNN XNNPACK backend
implementation and the ongoing
[discussion](webmachinelearning/webnn#324) in the
WebNN spec about whether WebNN should support both 'NHWC' and 'NCHW'
layouts. There is no WebNN native EP; this is for Web only.

**Motivation and Context**:
Allow ONNXRuntime Web developers to access WebNN API to benefit from
hardware acceleration.

**WebNN API Implementation Status in Chromium**:
- Tracked in Chromium issue:
[#1273291](https://bugs.chromium.org/p/chromium/issues/detail?id=1273291)
- **CPU device**: based on the XNNPACK backend; has been available on
Chrome Canary M112 behind the "#enable-experimental-web-platform-features"
flag for Windows and Linux. Further implementation of more ops is
ongoing.
- **GPU device**: based on DML, implementation is ongoing.

**Open**:
- GitHub CI: WebNN is currently only available on Chrome Canary/Dev with
the XNNPACK backend for Linux and Windows. This is an open question to
reviewers: please help identify which GitHub CI should involve the WebNN
EP and guide me to enable it. Thanks!
prathikr pushed a commit to microsoft/onnxruntime that referenced this issue May 16, 2023
@wacky6

wacky6 commented Jul 20, 2023

I want to share a data point.

I was playing with Real-ESRGAN today, and found out that torch.compile channel_last layout is faster than torch.compile channel_first layout on my NVIDIA A4000.

I'm not sure how well this transfers to other models (ESRGAN is heavily based on CNNs + residual connections), though.

I wonder if we should benchmark channel ordering on different hardware (i.e. vendors other than NVIDIA could optimize for channel_first).

Or maybe this won't matter if the graph builder (or rather the optimizer) is "clever" enough.

@huningxin
Contributor Author

There is a security perspective from @quidity (thanks, Alex!) in the review of Chromium CL-4653303, "WebNN: Define conv2d operator in mojo".

Alex mentioned:

enum Conv2dFilterOperandLayout { kOihw, kHwio, kOhwi, kIhwo };
this feels very error prone - is there a better way to represent the layout of the data at this stage or restrict the ways that is presented to the privileged process?

@wacky6

wacky6 commented Aug 15, 2023

FWIW, another way to tackle layout is to tell the implementation which layout should be used, like: https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html

This could be a hint to GraphBuilder.build() (right before producing a graph that can be passed to compute()).
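Purely as an illustration of that suggestion (nothing like this exists in the WebNN spec; the `preferredLayout` option is made up here, and `output` is assumed to be an already-built operand), such a hint might look like:

```js
// Hypothetical: the caller states which layout its operands use, and the
// implementation picks/converts the internal layout once, at build() time.
const graph = await builder.build(
    { output },
    { preferredLayout: 'nhwc' }); // not part of the current MLGraphBuilder.build()
```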

--

Taking a step back, I still strongly prefer a single unified layout (i.e. NHWC) that's applied throughout MLGraphBuilder methods, and let the backend (e.g. DMLImpl) change the layout (if necessary) before sending to hardware.

@zolkis
Collaborator

zolkis commented Aug 15, 2023

From the end users' perspective, I sympathize with the single default layout idea: select the one most supported in the industry and let the back-end make any changes needed, making layout an implementation detail. End users might need to convert the layout in a few corner cases.

However, from an API design standpoint, there is also the question of what the clients of this API will want to control, i.e. what the API should expose.

A user-facing API is more free to make simplifications by convention or by re-framing the user interaction.
A lower-level API, such as this one, should be simple and transparent without added "smartness"; therefore, in the long term it might be better to support the union of the use cases.

In the comment above there are arguments that the single default layout might also simplify the usage of the API.

When not sure, it is usually good practice in Web APIs to start with the simpler API and then extend it as needed, making sure extensibility is possible by design, i.e. no API breaks.

@wchao1115
Collaborator

wchao1115 commented Aug 23, 2023

The design intent of WebNN as a backend API prioritizes completeness, efficiency, and expressiveness over ease of use. For instance, automatic shape inference is not supported, as it is assumed to be the responsibility of the framework or app calling into WebNN directly. This limitation, while not as easy to use, allows the API to be more flexible and adaptable to different framework policies.

I agree with the premise that having excessive layout options makes the API harder to implement. I think reducing the filter layout options in MLConv2dFilterOperandLayout to just the first two, "oihw" and "hwio", makes sense. The first is the default filter layout of torch and ONNX, while the second is that of TensorFlow.

However, a trickier conversation is the input layout. Interestingly, the first option "nchw" is also the default in torch and ONNX, while the second option "nhwc" is supported natively in TensorFlow, historically influenced by the design of NVIDIA Tensor Core's FP16 native layout in 2017, starting with the Volta generation -- the Titan V. It isn't a mainstream layout supported by all other vendors, just a very common one on NVIDIA GPUs with the FP16 tensor data type.

There are TensorFlow models nowadays that still rely on the NHWC input layout. These models, once converted to the other format, often end up with each conv2d layer bracketed by a pair of transposes, a superfluous outcome at first glance, but easily collapsible by an intelligent backend later on. On the other hand, allowing the NHWC layout to propagate down through the graph could push the layout mismatches further down the stack and make it harder for the implementer to detect and optimize away the unneeded double transposes.

I support removing the input layout enum MLInputOperandLayout and assuming the input layout is always NCHW. WebNN already supports the transpose operator for whenever a layout change is required by a layout converter.
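To make the "bracketed by a pair of transposes" pattern above concrete, a sketch of two consecutive NHWC conv2d layers lowered onto an NCHW-only conv2d (`builder`, `inputNhwc`, `filter1Oihw`, and `filter2Oihw` are assumed to exist):

```js
// Each conv2d gets an NHWC -> NCHW transpose in front and an NCHW -> NHWC
// transpose behind. Between the two layers, the y1/x2 pair is adjacent and
// can be collapsed by the backend at build() time.
const x1 = builder.transpose(inputNhwc, { permutation: [0, 3, 1, 2] }); // NHWC -> NCHW
const c1 = builder.conv2d(x1, filter1Oihw);
const y1 = builder.transpose(c1, { permutation: [0, 2, 3, 1] });        // NCHW -> NHWC

const x2 = builder.transpose(y1, { permutation: [0, 3, 1, 2] });        // cancels with y1
const c2 = builder.conv2d(x2, filter2Oihw);
const y2 = builder.transpose(c2, { permutation: [0, 2, 3, 1] });
```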

inexorabletash added a commit to inexorabletash/webnn that referenced this issue Jan 26, 2024
This change introduces a new section for Algorithms, following APIs,
to collect algorithms referenced throughout the specification.

A section for Broadcasting is introduced, which defines broadcasting
shapes and gives an explicit algorithm matching WebNN implementations
of NumPy's General Broadcasting Rules. Definitions for "broadcastable"
and "unidirectionally broadcastable" are introduced. The previous
definition of "broadcast-shapes" is removed in favor of these new
algorithms.

For webmachinelearning#324, webmachinelearning#462, and potentially webmachinelearning#523.
fdwr added a commit that referenced this issue Feb 13, 2024
* New content: Add definition for shape broadcasting

This change introduces a new section for Algorithms, following APIs,
to collect algorithms referenced throughout the specification.

A section for Broadcasting is introduced, which defines broadcasting
shapes and gives an explicit algorithm matching WebNN implementations
of NumPy's General Broadcasting Rules. Definitions for "broadcastable"
and "unidirectionally broadcastable" are introduced. The previous
definition of "broadcast-shapes" is removed in favor of these new
algorithms.

Use broadcasting definition in expand(), rather than bespoke steps

For #324, #378, #462, and potentially #523.

Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>

* Fix prelu parameter order

---------

Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>
@huningxin
Contributor Author

A summary of the Chromium prototyping:

  • Expose the backend's preferred layout through MLOpSupportLimits.preferredInputLayout.
  • Insert transpose operators that convert the user-supplied layout to the backend-supported layout if needed (see the sketch below).
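A sketch of how a caller (or framework) could use that, assuming the MLContext.opSupportLimits() surface named above and an existing `context` and `builder`:

```js
const limits = context.opSupportLimits();

// Feed NCHW data directly if the backend prefers it; otherwise insert a
// transpose (or lay out the data as NHWC up front).
let x = builder.input('input', { dataType: 'float32', shape: [1, 3, 224, 224] });
if (limits.preferredInputLayout === 'nhwc') {
  x = builder.transpose(x, { permutation: [0, 2, 3, 1] }); // NCHW -> NHWC
}
```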

@inexorabletash
Member

Discussion in WebML WG meeting at TPAC, resolved to close. https://www.w3.org/2024/09/23-webmachinelearning-minutes.html#t05

@inexorabletash
Member

I don't believe any spec updates are needed here, so closing, but please re-open if I missed something @huningxin !

@fdwr
Collaborator

fdwr commented Sep 23, 2024

I don't believe any spec updates are needed here

We need to remove pooling's rounding direction for the output dimensions and just use outputSizes, consistent with conv2d.

dictionary MLPool2dOptions : MLOperatorOptions {
    sequence<[EnforceRange] unsigned long> windowDimensions;
    sequence<[EnforceRange] unsigned long> padding;
    sequence<[EnforceRange] unsigned long> strides;
    sequence<[EnforceRange] unsigned long> dilations;
    MLInputOperandLayout layout = "nchw";
-   MLRoundingType roundingType = "floor";
    sequence<[EnforceRange] unsigned long> outputSizes;
};

https://www.w3.org/TR/webnn/#api-mlgraphbuilder-pool2d-average
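For reference, the caller can compute outputSizes itself with whichever rounding it wants; a sketch of the usual pooling output-size arithmetic (explicit padding, ignoring auto-pad):

```js
// Output size of one spatial dimension for pool2d. Choosing Math.floor or
// Math.ceil here is what the roundingType option used to select.
function pool2dOutputSize(inputSize, windowSize, padBegin, padEnd, stride, dilation, round = Math.floor) {
  const effectiveWindow = (windowSize - 1) * dilation + 1;
  return round((inputSize - effectiveWindow + padBegin + padEnd) / stride) + 1;
}

// Example: 7x7 input, 2x2 window, no padding, stride 2, dilation 1.
pool2dOutputSize(7, 2, 0, 0, 2, 1, Math.ceil);  // 4
pool2dOutputSize(7, 2, 0, 0, 2, 1, Math.floor); // 3
```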
