Allow "auto" layout args for the create_compute_pipeline
#423
Conversation
Thanks! This looks good, and you also added a test 👍 I only have one small comment: cover the else clause in the triage on layout.
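For reference, the "triage on layout" could look something like the sketch below. This is purely illustrative and not the actual wgpu-py implementation; `resolve_layout`, `null_handle`, and the stand-in `GPUPipelineLayout` class are assumed names.

```python
# Illustrative sketch of a "triage on layout"; names are hypothetical,
# not the actual wgpu-py code.

class GPUPipelineLayout:
    """Stand-in for wgpu-py's pipeline-layout wrapper class."""

    def __init__(self, internal_handle):
        self._internal = internal_handle


def resolve_layout(layout, null_handle=None):
    """Map the user-facing `layout` argument onto the native argument."""
    if layout == "auto":
        # Pass NULL so wgpu-native derives the layout from the shader.
        return null_handle
    elif isinstance(layout, GPUPipelineLayout):
        return layout._internal
    else:
        # The else clause: reject invalid input with a clear error
        # instead of passing garbage to the native API.
        raise TypeError("layout must be 'auto' or a GPUPipelineLayout")
```

The point of the explicit else clause is that a typo like `layout="Auto"` fails loudly in Python rather than deep inside the native call.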
As implemented in this PR, it seems to work correctly only when the group_id is 0, and otherwise, it crashes.
I'll have a look too.
It nearly works, but there is no way (that I know of) to know how many bind groups there are. @rajveermalviya, this PR implements auto-layouts; it uses …
Co-authored-by: Almar Klein <almar@almarklein.org>
- always call the underlying API, instead of holding handles. It is now quite unsafe, since there is no way to raise an exception: wgpu-native aborts immediately if the given index is out of range.
I've added two commits on my end. The first one is related to the implementation of …. One concern is the current implementation of …; see https://www.w3.org/TR/webgpu/#dom-gpupipelinebase-getbindgrouplayout
Let me know if you have any comments or concerns.
This reverts commit 8c61dad.
I think this behavior is acceptable. Part of me would like the same object to be returned on multiple calls to get_bind_group_layout(), but that would complicate the code, and it indeed looks like the WebGPU spec does not require it. So let's go with this approach. For extra context, also see this comment:
wgpu-py/wgpu/backends/wgpu_native/_api.py
Lines 951 to 958 in 031a766
# Note: wgpu-core re-uses BindGroupLayouts with the same (or similar
# enough) descriptor. You would think that this means that the id is
# the same when you call wgpuDeviceCreateBindGroupLayout with the same
# input, but it's not. So we cannot let wgpu-native/core decide when
# to re-use a BindGroupLayout. I don't feel confident checking here
# whether a BindGroupLayout can be re-used, so we simply don't. Higher
# level code can sometimes make this decision because it knows the app
# logic.
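The comment above notes that higher-level code can sometimes make the re-use decision because it knows the app logic. A minimal sketch of such an app-level cache (a hypothetical helper, not part of wgpu-py) could be:

```python
import json

class BindGroupLayoutCache:
    """App-level cache that re-uses bind group layouts for equal descriptors.

    `create_func` would typically be something like
    `device.create_bind_group_layout`; here it is any callable that
    takes a list of entry dicts and returns a layout object.
    """

    def __init__(self, create_func):
        self._create = create_func
        self._cache = {}

    def get(self, entries):
        # Serialize the descriptor with sorted keys to obtain a stable,
        # hashable cache key, so dict key order does not matter.
        key = json.dumps(entries, sort_keys=True)
        if key not in self._cache:
            self._cache[key] = self._create(entries)
        return self._cache[key]
```

Because the application controls the cache and its lifetime, it can safely make the re-use decision that wgpu-py itself declines to make.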
IMO this seems like a good feature request for the JS spec. Please file an issue about this over at gfx-rs/wgpu-native.
@rajveermalviya thanks for the response. With the current solution I don't think we need …
Force-pushed from 7347af7 to 2d1e26c
LGTM!
For the moment the panic-on-invalid-index is unfortunate, but not enough of an issue to hold off this PR. In the meantime, I created a PR to let wgpu-native produce friendlier errors: gfx-rs/wgpu-native#320
I attempted the implementation to allow "auto" for the layout in create_compute_pipeline, but to be honest, I'm not entirely clear on how the behavior of wgpuComputePipelineGetBindGroupLayout is defined for a pipeline created with ffi.NULL as the layout argument. As implemented in this PR, it seems to work correctly only when the group_id is 0, and otherwise it crashes. However, I can confirm that applying this patch enables the same syntax as the wgpu compute shader example below.
https://github.com/gfx-rs/wgpu/blob/fd53ea90e675d94f1d79a1c3c44b2f356cecd9c5/examples/hello-compute/src/main.rs#L106-L123
I'm not sure if this patch can be merged, but I'm documenting the necessary steps here as a memo of the work required to accept "auto". If it seems like progress can be made, I'll proceed with the necessary tasks.