[js/webnn] Enable user-supplied MLContext #20600
Conversation
Thank you @Galli, a very good start for WebNN I/O binding support!
Generally LGTM % a nit. I will test it and let you know if there are any other issues. :)
The current implementation has a few problems:
I have moved the context de-duplication to a singleton in C++.
Adding a few comments here: a new issue (#20729) reveals a clearer picture of what the actual requirement would be. Users may want to manipulate the … Considering the latest spec (https://www.w3.org/TR/webnn/#api-ml-createcontext), there will be a WebNN-WebGPU interop and …
Since the API may accept an MLContext as user input, we need to pass the MLContext from JS to C++ anyway. So it may be a good idea to create and manage the MLContext in JS.
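One way the JS-side management discussed above could look is a small de-duplication cache, so that sessions created with identical context options share a single MLContext. This is a hypothetical sketch, not the PR's actual implementation; `getOrCreateContext` and its `createFn` parameter (standing in for `navigator.ml.createContext` in a browser) are illustrative names.

```javascript
// Hypothetical sketch: de-duplicate MLContexts on the JS side by caching
// one context per serialized option set.
const contextCache = new Map();

async function getOrCreateContext(options, createFn) {
  // `createFn` stands in for navigator.ml.createContext in a real browser.
  const key = JSON.stringify(options ?? {});
  if (!contextCache.has(key)) {
    contextCache.set(key, await createFn(options));
  }
  return contextCache.get(key);
}
```

Two calls with the same options then return the same context object, which is the property MLBuffer sharing relies on.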
This change enables the API added in microsoft#20816 and moves context creation to JS.
@fs-eire I have updated the PR to use the new API, PTAL.
👍 % a nit.
@fs-eire, gentle ping. :)
/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline
Azure Pipelines successfully started running 3 pipeline(s).
@guschmue I have fixed the linting issues. That said, I'm concerned about adding …
Description
This PR enables the API added in #20816 and moves context creation to JS.
Motivation and Context
In order to enable I/O binding with the upcoming MLBuffer API in the WebNN specification, we need to share the same `MLContext` across multiple sessions. This is because `MLBuffer`s are restricted to the `MLContext` where they were created. This PR enables developers to use the same `MLContext` across multiple sessions.
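A sketch of how an application might use this, assuming (per this PR's discussion, not a confirmed final API) that the WebNN execution-provider entry accepts a user-supplied context via a `context` field; `webnnOptionsWithContext`, `main`, and the model file names are illustrative:

```javascript
// Sketch: sharing one MLContext across two ONNX Runtime Web sessions.
// The `context` field on the WebNN execution-provider entry is assumed
// from this PR's discussion; check the ort-web typings for the exact shape.

// Build session options that reuse a caller-supplied MLContext.
function webnnOptionsWithContext(mlContext, deviceType = 'gpu') {
  return {
    executionProviders: [{ name: 'webnn', deviceType, context: mlContext }],
  };
}

async function main(ort) {
  // One context for the whole app (WebNN API, browser-only).
  const mlContext = await navigator.ml.createContext({ deviceType: 'gpu' });
  const opts = webnnOptionsWithContext(mlContext);
  // Both sessions share the context, so MLBuffers created under it
  // can be bound as inputs/outputs for either session.
  const encoder = await ort.InferenceSession.create('encoder.onnx', opts);
  const decoder = await ort.InferenceSession.create('decoder.onnx', opts);
  return [encoder, decoder];
}
```

Without a shared context, each session would create its own `MLContext`, and an `MLBuffer` produced by one session could not be consumed by the other.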