CommandBuffer usage clarification: internal, external, both? #264
Thanks for this feedback, @bbernhar. We discussed this issue on our call and flagged @RafaelCintron and @huningxin for further comments.
Thanks @anssiko for the update. But I was seeking clarification on the usage of `MLCommandBuffer`. I believe this is a matter for the WebNN WG to decide.
@bbernhar thank you for this additional context. I amended the 19 May 2022 agenda with this issue.
It isn't …
@bbernhar we discussed this issue today with the help of your additional context. @wchao1115 provided his perspective, and I'm sure he is available to answer any questions you may have in this issue.
@wchao1115 Command buffers are always immutable once created; that's not my question. The question is immutability vs. usage. External-only usage means WebNN is unable to synchronize access: once the command buffer is created, any further operation on it performed by WebNN is forbidden, whether through the CPU or through another GPU operation. Internal-only usage is the reverse: only WebNN may access the buffer.
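The distinction being drawn above can be sketched as a toy model. All names here are hypothetical; neither the WebNN nor the WebGPU spec exposes such an enum — this only illustrates the two access policies being contrasted:

```typescript
// Toy model of the two usage modes discussed above.
// "Usage" and "Accessor" are hypothetical names for illustration only.
type Usage = "internal-only" | "external-only";
type Accessor = "webnn" | "webgpu" | "cpu";

// Returns whether `who` may touch a command buffer created with `usage`.
function mayAccess(usage: Usage, who: Accessor): boolean {
  if (usage === "external-only") {
    // Once created, only the external API (WebGPU) may use it;
    // WebNN can no longer synchronize CPU or GPU access to it.
    return who === "webgpu";
  }
  // internal-only: the reverse — only WebNN itself may use it.
  return who === "webnn";
}

console.log(mayAccess("external-only", "webnn")); // false
console.log(mayAccess("internal-only", "webnn")); // true
```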
I would expect the …
Let's assume they are "compatible" and we have some buffer or texture referenced within the command buffer. Are you telling me "the contract" requires WebNN to go into "read-only" mode (i.e. no more GPU for you) until WebGPU finishes execution? I believe a lot more clarification is needed here.
Here's the relevant excerpt in the spec:
I understand. But my point was that the same ML resource could be referenced through a WebNN-produced command buffer. Since WebNN has no way of knowing what resource state WebGPU execution needs, and vice versa, a contract by interface isn't sufficient here; WebNN needs well-defined resource usage rules too.
In the latest version of the spec, the …
Unfortunately, no. WebGPU could put the GPU resource in a different state for another GPU operation. The problem comes from sharing the resource at all. The only way I can think of solving this is by defining resource usage rules in WebNN, and introducing …
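The hazard being described can be sketched as follows. The states, classes, and function names are hypothetical (real drivers and APIs track much richer state); the point is only that each API sees its own transitions and not the other's:

```typescript
// Toy model of the synchronization hazard: two APIs transition a shared
// resource's state without seeing each other's transitions.
// All states and names here are hypothetical, for illustration only.
type ResourceState = "ml-input" | "ml-output" | "shader-readonly" | "copy-dst";

class SharedResource {
  constructor(public state: ResourceState) {}
}

// WebNN records only the state *it* last requested for the resource.
function webnnExpects(_r: SharedResource): ResourceState { return "ml-input"; }

// WebGPU transitions the resource for its own passes.
function webgpuTransition(r: SharedResource, to: ResourceState): void { r.state = to; }

const buf = new SharedResource("ml-input");
webgpuTransition(buf, "copy-dst"); // WebGPU repurposes it for a copy pass.

// WebNN still assumes "ml-input" — without shared usage rules, neither
// side can detect the mismatch before submitting GPU work.
const hazard = buf.state !== webnnExpects(buf);
console.log(hazard); // true: the resource is not in the state WebNN expects
```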
I think what Ningxin means by:

> In response to the issue you raised:

is that … He does not comment on the interop design; he simply responds that what you raised is irrelevant to the interop discussion. That, by the way, is also my response earlier.
WebNN cannot produce a command buffer without specifying the rules of resource usage; it cannot remove itself from the responsibility to define AND enforce them somewhere. So if …

Edit: it might help to have an example.
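One way to picture where that enforcement could live: if WebNN defines the usage rules, it can validate them when the command buffer is assembled, rather than leaving enforcement undefined. This is a sketch only; all names are hypothetical and nothing like this exists in the spec:

```typescript
// Sketch: if WebNN defines usage rules, it can also enforce them at
// command-buffer creation time. All names here are hypothetical.
type DeclaredUsage = "internal" | "external";

interface TrackedResource { declared: DeclaredUsage; }

// Reject resources whose declared usage doesn't match how the command
// buffer's consumer will use them; return human-readable violations.
function validateCommandBuffer(
  resources: TrackedResource[],
  consumer: DeclaredUsage
): string[] {
  return resources
    .map((r, i) =>
      r.declared !== consumer
        ? `resource ${i}: declared ${r.declared}, consumed as ${consumer}`
        : "")
    .filter(msg => msg !== "");
}

const errors = validateCommandBuffer(
  [{ declared: "internal" }, { declared: "external" }],
  "external");
console.log(errors.length); // 1: the internally-declared resource is rejected
```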
Thanks @wchao1115 for your comments. It clarifies my response regarding …

I think we need to solve this problem.
I am not sure whether we need to define …. In my understanding, the computational workload of a compiled `MLGraph` …
Besides the resource rules, we probably also need to clarify the tensor data layout in GPU resources, especially for …. For example, in the WebNN/WebGPU video background blur sample, I implemented a WebGPU compute shader that preprocesses the input …
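For illustration, the kind of layout conversion such a preprocessing step performs can be sketched on the CPU; a WebGPU compute shader would do the same index math in WGSL. Shapes and names here are illustrative, not taken from the sample:

```typescript
// CPU sketch of an NHWC -> NCHW relayout, the kind of index math a
// preprocessing compute shader performs before binding a tensor.
function nhwcToNchw(
  src: Float32Array, n: number, h: number, w: number, c: number
): Float32Array {
  const dst = new Float32Array(src.length);
  for (let b = 0; b < n; b++)
    for (let y = 0; y < h; y++)
      for (let x = 0; x < w; x++)
        for (let ch = 0; ch < c; ch++) {
          const srcIdx = ((b * h + y) * w + x) * c + ch; // NHWC offset
          const dstIdx = ((b * c + ch) * h + y) * w + x; // NCHW offset
          dst[dstIdx] = src[srcIdx];
        }
  return dst;
}

// 1x2x2x2 example: channels interleaved per pixel -> one plane per channel.
const nhwc = new Float32Array([0, 10, 1, 11, 2, 12, 3, 13]);
console.log(Array.from(nhwcToNchw(nhwc, 1, 2, 2, 2)));
// -> [0, 1, 2, 3, 10, 11, 12, 13]
```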
We could "map" resource usages between WebGPU <=> WebNN (e.g. input binding == read-only uniform) and also define new texture layouts (plus conversions via a "fat shader") for WebNN. But the underlying issue is still there: resource states need to be correctly synchronized between WebNN and WebGPU or they will stomp over each other. Your example works because it does not use GPU resources outside of WebGPU; it has internal-only usage. The shared video resource imported into WebGPU was made mutually exclusive upon `importExternalTexture()`. WebGPU "rents" the video texture from the video element, but under the strict condition that it is never a normal GPU texture.
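Such a usage "mapping" might look like the sketch below. The `MLResourceRole` names are hypothetical; the numeric bit values mirror the `GPUBufferUsage` constants defined in the WebGPU spec, but are redefined locally so the sketch runs standalone:

```typescript
// Hypothetical mapping from a WebNN resource role to WebGPU buffer-usage
// bits. Values mirror the WebGPU spec's GPUBufferUsage constants.
const GPUBufferUsageBits = {
  COPY_SRC: 0x0004,
  COPY_DST: 0x0008,
  UNIFORM: 0x0040,
  STORAGE: 0x0080,
} as const;

type MLResourceRole = "input" | "output" | "constant"; // hypothetical names

function toGpuUsage(role: MLResourceRole): number {
  switch (role) {
    case "input":    // read-only binding, filled by an upload copy
      return GPUBufferUsageBits.UNIFORM | GPUBufferUsageBits.COPY_DST;
    case "output":   // written by the ML workload, readable back
      return GPUBufferUsageBits.STORAGE | GPUBufferUsageBits.COPY_SRC;
    case "constant": // immutable weights, uploaded once
      return GPUBufferUsageBits.UNIFORM | GPUBufferUsageBits.COPY_DST;
  }
}

console.log(toGpuUsage("output").toString(16)); // "84" (STORAGE | COPY_SRC)
```

A mapping like this only covers resource *creation*, though; per the point above, it does nothing for state synchronization at execution time.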
It's my understanding that this proposal has been superseded by …
@a-sully SGTM. |
If the intent of WebNN is to produce an immutable `MLCommandBuffer` that can be read-only by WebGPU, then I would suggest we consider 1) renaming it to `MLExternalCommandBuffer` and 2) avoiding overloading WebGPU interop with requirements WebGPU does not follow (a command buffer with GPU commands being equal to a command buffer with non-GPU commands) by moving `MLExternalCommandBuffer` into a WebNN Interop section.

Alternatively, we could keep `MLCommandBuffer` (internal usage) but allow WebNN a means to submit (e.g. `computeAsync`). This would avoid breaking WebGPU and allow WebNN to have a consistent GPU async support experience on both native (standalone) and web.

Thoughts?

@huningxin @wchao1115 @RafaelCintron
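A rough shape of the second alternative, as I read it. All interface names are hypothetical; neither API is specified this way — this only sketches the call flow where WebNN owns submission:

```typescript
// Hypothetical shape of the "keep it internal, let WebNN submit" option.
// None of these interfaces exist in the WebNN spec; sketch only.
interface MLCommandBufferLike {
  readonly usage: "internal"; // never handed to WebGPU directly
}

interface MLQueueLike {
  // WebNN owns submission, mirroring native ML runtimes' async compute.
  computeAsync(cb: MLCommandBufferLike): Promise<void>;
}

// Stub implementation to show the flow: build internally, submit via WebNN.
class StubQueue implements MLQueueLike {
  submitted = 0;
  async computeAsync(_cb: MLCommandBufferLike): Promise<void> {
    this.submitted++; // a real queue would schedule GPU work here
  }
}

const q = new StubQueue();
q.computeAsync({ usage: "internal" }).then(() => {
  console.log(q.submitted); // 1
});
```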