Introduce Layers in the Renderer #430
Conversation
There are a lot of elements on the layout that are almost entirely static. Other elements, such as the timer, are frequently changing. In the browser we already noticed that by pushing the timer onto its own layer, you get a fairly big performance boost out of it. Here's that Pull Request:
LiveSplit/LiveSplitOne#378

We want to apply a similar idea to the native renderer. In order to do this, we first introduce flags in the layout state that mark whether a component, or a part of a component, is considered frequently changing at that specific time. By doing this, not only does the native renderer benefit from it, but other renderers, such as the web version, can also make decisions based on that, as opposed to only applying the optimization to the timer component.

With all the components providing these flags, the renderer can now split up the frame into a bottom layer and a top layer, where all the frequently changing elements are rendered onto the top layer and all the other elements onto the bottom layer. In fact the top layer is actually the frame buffer, and it simply gets cleared by copying over regions from the bottom layer. That way no additional compositing is necessary.

Additionally, the renderer is now more of a scene manager that manages entities in a scene rather than directly emitting draw calls to a backend. The scene is a data structure that allows traversing the two layers in any way a renderer (i.e. what used to be a backend) would want to render them out. So the whole design is a lot more decoupled now.
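To make the split concrete, here is a minimal sketch of the idea, with made-up names rather than the actual types introduced in this PR: components flagged as frequently changing in the layout state end up on the top layer, everything else on the bottom layer, and a rendering backend just traverses the two layers.

```rust
// Minimal sketch of the two-layer scene described above. The type and
// method names are illustrative, not the actual livesplit-core API.
struct Scene<E> {
    /// Rarely changes; only rerendered when the layout itself changes.
    bottom_layer: Vec<E>,
    /// Frequently changing elements (e.g. the timer); rerendered every
    /// frame directly into the frame buffer.
    top_layer: Vec<E>,
}

impl<E> Scene<E> {
    fn new() -> Self {
        Scene {
            bottom_layer: Vec::new(),
            top_layer: Vec::new(),
        }
    }

    /// Components marked as frequently changing in the layout state push
    /// their entities onto the top layer, everything else onto the bottom.
    fn push(&mut self, entity: E, frequently_changing: bool) {
        if frequently_changing {
            self.top_layer.push(entity);
        } else {
            self.bottom_layer.push(entity);
        }
    }

    /// A rendering backend traverses the layers however it wants.
    fn layers(&self) -> (&[E], &[E]) {
        (&self.bottom_layer, &self.top_layer)
    }
}
```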
@@ -7,7 +7,7 @@ use crate::{analysis, timing::Snapshot, TimeSpan, TimerPhase};
/// Calculates the current pace of the active attempt based on the comparison
/// provided. If there's no active attempt, the final time of the comparison is
/// returned instead.
Should we also have something like this here? "Additionally a boolean is returned that indicates if the value is currently actively changing as time is being lost."
Yeah, that probably should've mentioned it. I'll try to sneak it into some future commit.
} else if min_y <= max_y {
    let stride = 4 * stride as usize;
    let min_y = stride * (min_y - 1.0) as usize;
    let max_y = stride * ((max_y + 2.0) as usize).min(height as usize);
What's the reasoning behind the calculations here? I guess I don't really understand where these numbers are coming from.
Here we intend to rerender the top layer. For that we need to copy the pixels from the bottom layer over, so the top layer has a clean slate to work with. However, not all the pixels actually need to be copied, only those that the top layer intends to draw on top of (and some from the previous frame, as the top layer might've moved, so we potentially need to replace pixels where we no longer intend to have anything in the top layer). So we calculate a bounding box for the top layer and merge it with the previous frame's bounding box.

I haven't benchmarked this, but I wanted it to be a single copy operation, so what I do instead is copy consecutive image rows. That way it can be one big memcpy of n pixels, with n being a multiple of the image width. min_y and max_y get rounded down / up respectively here, so we don't miss any pixels, as those y values are floating point.

It might make sense for this to be a two-dimensional loop instead, where we also limit the copying on the x-axis. I'm not sure if the cost of the extra branching and checking is higher than just bursting through and copying everything consecutively.
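Roughly, the copy ends up looking something like this (a sketch with illustrative names, buffer layout, and rounding, not the exact code above): the merged dirty rows of the top layer are turned into one contiguous byte range, which is then copied from the bottom layer in a single slice copy.

```rust
/// Sketch of the row-range copy described above; names and the exact
/// rounding/padding are illustrative, not the renderer's actual code.
/// Both buffers are assumed to be RGBA8 with `stride` pixels per row
/// and `height` rows.
fn clear_top_layer_rows(
    top: &mut [u8],
    bottom: &[u8],
    stride: u32,
    height: u32,
    min_y: f32, // top edge of the merged dirty bounding box (fractional)
    max_y: f32, // bottom edge of the merged dirty bounding box (fractional)
) {
    if min_y <= max_y {
        let stride = 4 * stride as usize; // bytes per row
        // Round outward so partially covered rows aren't missed, then
        // clamp to the image so the range stays within both buffers.
        let first_row = (min_y.floor().max(0.0) as usize).min(height as usize);
        let last_row = ((max_y.ceil() as usize) + 1).min(height as usize);
        if first_row < last_row {
            let (start, end) = (stride * first_row, stride * last_row);
            // One contiguous memcpy of whole rows: no per-pixel branching.
            top[start..end].copy_from_slice(&bottom[start..end]);
        }
    }
}
```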
- Runs now support custom variables that are key value pairs that either the user can specify in the run editor or are provided by a script like an auto splitter. [#201](#201)
- There is now an option in the run editor to generate a comparison based on a user specified goal time. This uses the same algorithm as the `Balanced PB` comparison but with the time specified instead of the personal best. [#209](#209)
- Images internally are now stored as is without being reencoded as Base64 which was done before in order to make it easier for the web LiveSplit One to display them. [#227](#227)
- The Splits.io API is now available under the optional `networking` feature. [#236](#236)
- All key value based components share the same component state type now. [#257](#257)
- The crate now properly supports `wasm-bindgen` and `WASI`. [#263](#263)
- There is now a dedicated component for displaying the comparison's segment time. [#264](#264)
- Compiling the crate without `std` is now supported. Most features are not supported at this time though. [#270](#270)
- [`Splitterino`](https://github.com/prefixaut/splitterino) splits can now be parsed. [#276](#276)
- The `Timer` component can now show a segment timer instead. [#288](#288)
- Gamepads are now supported on the web. [#310](#310)
- The underlying "skill curve" that the `Balanced PB` samples is now exposed in the API. [#330](#330)
- The layout states can now be updated, which means almost all of the allocations can be reused from the previous frame. This is a lot faster. [#334](#334)
- In order to calculate a layout state, the timer now provides a snapshot mechanism that ensures that the layout state gets calculated at a fixed point in time. [#339](#339)
- Text shaping is now done via `rustybuzz` which is a port of `harfbuzz`. [#378](#378)
- Custom fonts are now supported. [#385](#385)
- The renderer is no longer based on meshes that are suitable for rendering with a 3D graphics API. Instead the renderer is now based on paths, which are suitable for rendering with a 2D graphics API such as Direct2D, Skia, HTML Canvas, and many more. The software renderer is now based on `tiny-skia` which is so fast that it actually outperforms any other rendering and is the recommended way to render. [#408](#408)
- Remove support for parsing `worstrun` splits. `worstrun` doesn't support splits anymore, so `livesplit-core` doesn't need to keep its parsing support. [#411](#411)
- Remove support for parsing `Llanfair 2` splits. `Llanfair 2` was never publicly available and is now deleted entirely. [#420](#420)
- Hotkeys are now supported on macOS. [#422](#422)
- The renderer is now based on two layers: a bottom layer that rarely needs to be rerendered and a top layer that needs to be rerendered on every frame. Additionally the renderer is now a scene manager which manages a scene that an actual rendering backend can then render out. [#430](#430)
- The hotkeys are now based on the [UI Events KeyboardEvent code Values](https://www.w3.org/TR/uievents-code/) web standard. [#440](#440)
- Timing is now based on `CLOCK_BOOTTIME` on Linux and `CLOCK_MONOTONIC` on macOS and iOS. This ensures that all platforms keep tracking time while the operating system is in a suspended state. [#445](#445)
- Segment time columns are now formatted as segment times. [#448](#448)
- Hotkeys can now be resolved to the US keyboard layout. [#452](#452)
- The hotkeys are now based on `keydown` instead of `keypress` in the web. `keydown` handles all keys whereas `keypress` only handles visual keys and is also deprecated. [#455](#455)
- Hotkeys can now be resolved to the user's keyboard layout on both Windows and macOS. [#459](#459) and [#460](#460)
- The `time` crate is now used instead of `chrono` for keeping track of time. [#462](#462)
- The scene manager now caches a lot more information. This improves the performance a lot as it does not need to reshape the text on every frame anymore, which is a very expensive operation. [#466](#466) and [#467](#467)
- The hotkeys on Linux are now based on `evdev`, which means Wayland is now supported. Additionally the hotkeys are not consuming the key press anymore. [#474](#474)
- When holding down a key, the hotkey doesn't repeat anymore on Linux, macOS and WebAssembly. The problem still occurs on Windows at this time. [#475](#475) and [#476](#476)