Expose maximum_frame_latency #4899
Conversation
I just want to emphasize what a huge difference going from 3 to 2 makes on Mac, at least in windowed mode on a 60 Hz monitor. It takes the application from feeling sluggish to feeling responsive (using […]). When the CPU is fast (e.g. producing a frame in 1 ms), […]. Interestingly though, when the CPU is too slow (e.g. taking 20 ms to prepare a frame), then […]. So: I'm planning on making […]
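(For context, a hedged sketch of the Metal-side knob involved here, using the `metal` crate's wrapper around `CAMetalLayer`; the helper name `set_frame_latency` is hypothetical, and wgpu's actual internal wiring may differ.)

```rust
use metal::MetalLayer;

/// Hypothetical helper: cap how many drawables can be in flight on macOS.
/// CAMetalLayer only accepts a maximumDrawableCount of 2 or 3, so the
/// requested latency is clamped to that range.
fn set_frame_latency(layer: &MetalLayer, latency: u64) {
    layer.set_maximum_drawable_count(latency.clamp(2, 3));
}
```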
Marking as "request review" per discussion in the call.
On most platforms this is merely a change in narrative; on DX12 it already has a practical effect, since we use it to directly set the frame latency.
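(For reference, a hedged sketch of what "directly set frame latency" could look like on the DXGI side, via the `windows` crate; this illustrates the OS API rather than wgpu's actual backend code, and assumes the swapchain was created with the frame-latency-waitable-object flag.)

```rust
use windows::core::Result;
use windows::Win32::Graphics::Dxgi::IDXGISwapChain2;

/// Hypothetical helper: cap how many frames the CPU may queue ahead of the GPU.
unsafe fn apply_frame_latency(swap_chain: &IDXGISwapChain2, max_latency: u32) -> Result<()> {
    swap_chain.SetMaximumFrameLatency(max_latency)
}
```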
The code now mostly looks like we want it to, I think! It's definitely way more expressive of the stated goal and in close alignment with […]. Still need to update the changelog and test whether […].
Got around to testing with Vulkan on Windows to see when GPU and CPU end up working in sync. On DX12 I'm hitting some odd bug that makes the frame time expand to a full second, so I don't have test results from that so far. Not the result I was hoping for. Need to repeat the same on Mac to understand this better.
Sounds like you might be hitting […]
Same behavior on Metal. The thing I don't get yet is how one […]
Let's party!
Setting `desired_maximum_frame_latency` to a low value should theoretically lead to lower latency in winit apps using `egui-wgpu` (e.g. in `eframe` with the `wgpu` backend).
* Replaces #3714
* See also gfx-rs/wgpu#4899

It seems like `desired_maximum_frame_latency` has no effect on my Mac. I lowered my monitor refresh rate to 30 Hz to test, and can see no difference between a `desired_maximum_frame_latency` of `0` and `3`. Previously, when experimenting with changing the global `DESIRED_NUM_FRAMES` in `wgpu`, I saw a huge difference, so I wonder what has changed. I verified that `set_maximum_drawable_count` is being called with either `1` or `2`, but I perceive no difference between the two.
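(A minimal sketch of how an application might opt in on the egui side, assuming the `wgpu_options` field on `eframe::NativeOptions` and the `desired_maximum_frame_latency` option on `egui_wgpu::WgpuConfiguration` described in the quoted PR.)

```rust
fn low_latency_options() -> eframe::NativeOptions {
    eframe::NativeOptions {
        wgpu_options: egui_wgpu::WgpuConfiguration {
            // Lower values should reduce input-to-display lag; `None` keeps egui's default.
            desired_maximum_frame_latency: Some(2),
            ..Default::default()
        },
        ..Default::default()
    }
}
```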
Connections
Immediate, as initially proposed here.

Description
Previously, the swapchain buffer count was hardcoded to always be 3, no matter what was supported. This PR exposes the desired swapchain buffer count as a minimal extension to the wgc presentation setup.
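(A minimal sketch of the exposed knob from the application side, assuming the field landed on `wgpu::SurfaceConfiguration` as `desired_maximum_frame_latency`; the size and format values are illustrative.)

```rust
fn configure_surface(surface: &wgpu::Surface<'_>, device: &wgpu::Device) {
    surface.configure(device, &wgpu::SurfaceConfiguration {
        usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
        format: wgpu::TextureFormat::Bgra8UnormSrgb,
        width: 1280,
        height: 720,
        present_mode: wgpu::PresentMode::AutoVsync,
        // Previously hardcoded to 3; 2 trades some pipelining headroom
        // for noticeably lower perceived latency.
        desired_maximum_frame_latency: 2,
        alpha_mode: wgpu::CompositeAlphaMode::Auto,
        view_formats: vec![],
    });
}
```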
Among other things, this now allows switching between "Game Mode" and "Classic Game Mode" as classified here:
https://www.intel.com/content/www/us/en/developer/articles/code-sample/sample-application-for-direct3d-12-flip-model-swap-chains.html
The prime motivation is to reduce perceived lag in egui applications, where a buffer count of 2 usually makes more sense (assuming a stable and low workload).
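As rough back-of-the-envelope numbers (mine, not from the PR): with vsync at 60 Hz (~16.7 ms per frame) and a CPU that always keeps the queue full, worst-case queued latency scales with the frame count, so 3 frames in flight is roughly 3 × 16.7 ≈ 50 ms of queue versus 2 × 16.7 ≈ 33 ms for 2, which lines up with the sluggish-vs-responsive difference described above.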
Testing
Experimented with a buffer count of 2 in the egui example on Mac & Windows. It gives a lot less latency.
Checklist
* Run `cargo fmt`.
* Run `cargo clippy`. If applicable, add:
  * `--target wasm32-unknown-unknown`
  * `--target wasm32-unknown-emscripten`
* Run `cargo xtask test` to run tests.
* Add change to `CHANGELOG.md`. See simple instructions inside file.