
Run semantic segmentation sample on full GPU pipeline #124

Open
huningxin opened this issue Jan 19, 2022 · 2 comments
Open

Run semantic segmentation sample on full GPU pipeline #124

huningxin opened this issue Jan 19, 2022 · 2 comments

Comments

@huningxin
Copy link
Contributor

huningxin commented Jan 19, 2022

This is required by webmachinelearning/webnn#226

To implement this pipeline, the semantic segmentation sample could execute the following steps:

  1. Use mediacapture-transform to capture a VideoFrame.
  2. Import the VideoFrame into a WebGPU texture (GPUExternalTexture).
  3. Execute a WebGPU compute shader to do pre-processing and convert the GPUExternalTexture to a GPUBuffer.
  4. Compute the MLGraph with the converted GPUBuffer; the output is another GPUBuffer.
  5. Execute a WebGPU shader to do post-processing on the output GPUBuffer and render the result to a canvas.
  6. Create a VideoFrame from the CanvasImageSource and enqueue it to the mediacapture-transform API's controller.
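The steps above can be sketched as a TransformStream wired between mediacapture-transform's MediaStreamTrackProcessor and MediaStreamTrackGenerator, with WebGPU's importExternalTexture for step 2. Note this is only a sketch under assumptions: the GPUBuffer-based `mlGraph.compute` call stands in for the WebGPU interop surface still being discussed in webmachinelearning/webnn#226, and `runPreprocessShader` / `runPostprocessShader` are hypothetical helpers for the compute-shader passes:

```javascript
// Hypothetical output size: a 512x512 float32 single-channel mask.
const OUTPUT_BYTES = 512 * 512 * 4;

// Sketch of the proposed full-GPU pipeline. `mlGraph.compute` with GPUBuffer
// inputs/outputs is illustrative (per the webnn#226 discussion), not a final API;
// runPreprocessShader / runPostprocessShader are hypothetical helpers that
// dispatch the pre-/post-processing compute shaders.
async function segmentFrames(inputStream, mlGraph, device, canvas) {
  // 1. Capture VideoFrames from the camera track via mediacapture-transform.
  const processor = new MediaStreamTrackProcessor({
    track: inputStream.getVideoTracks()[0],
  });
  const generator = new MediaStreamTrackGenerator({ kind: 'video' });

  const transformer = new TransformStream({
    async transform(videoFrame, controller) {
      // 2. Import the VideoFrame into a WebGPU external texture.
      const externalTexture = device.importExternalTexture({ source: videoFrame });

      // 3. Pre-process (e.g. normalize, NHWC->NCHW) into a GPUBuffer.
      const inputBuffer = runPreprocessShader(device, externalTexture);

      // 4. Compute the MLGraph; the output lands in another GPUBuffer.
      const outputBuffer = device.createBuffer({
        size: OUTPUT_BYTES,
        usage: GPUBufferUsage.STORAGE,
      });
      await mlGraph.compute({ input: inputBuffer }, { output: outputBuffer });

      // 5. Post-process the output buffer and render the mask to the canvas.
      runPostprocessShader(device, outputBuffer, canvas);

      // 6. Wrap the rendered canvas in a new VideoFrame and enqueue it.
      const outFrame = new VideoFrame(canvas, { timestamp: videoFrame.timestamp });
      videoFrame.close();
      controller.enqueue(outFrame);
    },
  });

  processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
  return new MediaStream([generator]);
}
```

Because every stage stays on the GPU until the final VideoFrame, this avoids the GPU-to-CPU readback that a CPU-side pre/post-processing pipeline would require.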
huningxin (Contributor, Author) commented
Related to webmachinelearning/webnn-polyfill#156

huningxin (Contributor, Author) commented

depends on w3c/webcodecs#412

Labels: none yet
Projects: none yet
Development: no branches or pull requests
1 participant