
UI logic: systems and event handlers #25

Closed

Conversation

alice-i-cecile (Member) commented May 27, 2021

RENDERED

This proposal discusses, in practical terms, how to use the existing(ish) tools of bevy_ecs to write maintainable, expressive and low-boilerplate UI code.

The core tools are surprisingly simple:

  1. Per-entity events
  2. A simple but unified input dispatching convention
  3. Plain old systems
  4. An EventHandler component that reads in events of the corresponding type to mutate the world in a later command
  5. Change detection and events to respond to UI changes
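To make the shape of tool 4 concrete, here is a minimal, dependency-free sketch of the event-handler pattern: a per-entity handler reads events addressed to its entity and queues a deferred mutation to run later, mirroring Bevy's deferred `Commands`. All names here (`EntityEvent`, `EventHandler`, `run_handlers`) are illustrative assumptions, not actual bevy_ecs API.

```rust
type Entity = u32;

// An event addressed to one specific entity (tool 1: per-entity events).
struct EntityEvent<E> {
    target: Entity,
    payload: E,
}

// Tool 4: a handler attached to an entity; the closure describes what
// deferred command to queue when a matching event arrives.
struct EventHandler<E> {
    entity: Entity,
    on_event: Box<dyn Fn(&E) -> String>,
}

// Handlers only read events targeting their own entity; the resulting
// "commands" (stand-ins for Bevy commands) are applied later, together.
fn run_handlers<E>(
    events: &[EntityEvent<E>],
    handlers: &[EventHandler<E>],
) -> Vec<String> {
    let mut queued_commands = Vec::new();
    for handler in handlers {
        for event in events.iter().filter(|e| e.target == handler.entity) {
            queued_commands.push((handler.on_event)(&event.payload));
        }
    }
    queued_commands
}

fn main() {
    struct Click;
    let handlers = vec![EventHandler {
        entity: 7,
        on_event: Box::new(|_: &Click| "despawn entity 7".to_string()),
    }];
    let events = vec![EntityEvent { target: 7, payload: Click }];
    let commands = run_handlers(&events, &handlers);
    assert_eq!(commands, vec!["despawn entity 7".to_string()]);
}
```

The key design point is the split between *reading* the event (immediate, parallel-friendly) and *applying* the mutation (deferred), which is what lets handlers stay plain data on entities.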

alice-i-cecile (Member, Author) commented May 27, 2021

From my perspective, this design is tentatively complete. The next steps here are:

  1. Create a basic implementation of this, focusing on per-entity events (bevy#2116) and the callback pattern.
  2. Build a reasonably complex app (e.g. a todo list) in it using the existing bevy_ui tools to verify that it can actually handle everything we want from it.
  3. If that's too frustrating, get relations merged to improve related-but-out-of-scope QoL issues. I'm definitely concerned that manipulating the data on child entities is going to be too tedious without relations.

I'm going to be tackling the new book first though, so it'll be a couple weeks before I can start on that in earnest.

2. Changing which UI elements are displayed (e.g. swapping tabs, pulling up a menu): this is precisely what `on_enter` and `on_exit` system sets in `States` are intended to solve.
  3. Handling layout changes: this should be done automatically in a single system (or group of systems) that runs in `CoreStage::PostUpdate`, rather than being scattered across our logic.

For everything else, change detection and events should be more than adequate.
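As background on why change detection is adequate here, this is a dependency-free sketch of the tick-based mechanism behind Bevy's `Changed<T>` query filter: each component records the tick of its last mutation, and a system only sees components changed since it last ran. `Tracked` and `changed_since` are illustrative stand-ins, not Bevy types.

```rust
// A component paired with the tick at which it was last written.
struct Tracked<T> {
    value: T,
    changed_tick: u64,
}

impl<T> Tracked<T> {
    fn set(&mut self, value: T, current_tick: u64) {
        self.value = value;
        self.changed_tick = current_tick;
    }
}

// Returns references to components mutated after `last_run_tick`,
// i.e. what a `Query<&T, Changed<T>>` would yield.
fn changed_since<T>(components: &[Tracked<T>], last_run_tick: u64) -> Vec<&T> {
    components
        .iter()
        .filter(|c| c.changed_tick > last_run_tick)
        .map(|c| &c.value)
        .collect()
}

fn main() {
    let mut health = vec![
        Tracked { value: 100, changed_tick: 0 },
        Tracked { value: 50, changed_tick: 0 },
    ];
    // Tick 1: only the second entity's health changes.
    health[1].set(25, 1);
    let dirty = changed_since(&health, 0);
    assert_eq!(dirty, vec![&25]);
}
```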


Generalizing behavior gives the example of an ability task bar to show how this will scale to more complicated UIs that can't be defined at compile time.

I don't know if I agree with this section, because the actual behavior I expect from a task bar is something like the following:

  1. There is not a 1:1 relationship to abilities and slots in the task bar. At lower levels, the player may have fewer abilities than slots. Slots will need to support being "unassigned". At higher levels, the player may have more abilities than slots.

  2. An ability doesn't need to be in the task bar. It should work correctly even when not assigned to a slot.

  3. It's possible for the player to reassign abilities and slots, e.g. by drag'n'dropping. The same slot does not always correspond to the same ability. It may be possible for a player to assign the same ability to multiple slots, or attempting to do that may result in the first assignment being replaced by subsequent assignments.

  4. The player can press a hot key associated with the slot (usually a number key 0-9), or they can click the slot. Either will activate the ability. It may be possible for the number keys to be remapped to other user-defined keys. It may be possible for the player to assign a key directly to an ability without requiring the ability to be in a task bar.

  5. If the ability isn't ready, there will be some audio & visual feedback in response to trying to activate the ability.

  6. There will be visual feedback when an ability is ready to use, on cooldown, and in progress (e.g. wind-up or other animation delays, or backswing on casting). This should work regardless of activation method -- by clicking the task bar, by pressing a hot key for that slot, by pressing an assigned key for the ability, or by selecting the ability out of the player's full list of abilities.

  7. It may be possible to cancel out of the ability after starting it. Cancelling an ability has visual feedback.

  8. The visual feedback for cooldown shows the time remaining in some intuitive way, e.g. a horizontal or radial wipe, or even just seconds counting down until usable again.

  9. Mousing over a slot should show a tool tip displaying the ability description. This may also display contextual information, e.g. if it's on cooldown or if the player character has some status effect preventing the use of the ability. The tool tip does not pop up immediately but only after some time hovering the slot.

  10. Right clicking a slot should bring up a context menu, possibly related to the ability.

  11. The player may be allowed to maintain separate task bar configurations and swap between them (maybe with F1, F2, and so on).

  12. The size of the task bar may change to only show slots that have abilities assigned to them.

  13. The size of the task bar may change because the player can expand it or shrink it in some fashion. As an example, it might normally be one row but the player can optionally make it more than one row. The player's configuration for other task bar configurations should be remembered between configuration changes.

  14. The ability in the slot will determine the visual look of the slot. This is normally just an icon, but in some games this may include additional visual information tied to a specific ability -- e.g. an ability that powers up with subsequent kills may show, on the slot itself, the current number of kills.

  15. Sometimes the task bar will show the name of the ability under each slot. The ability name may need to be localized.

  16. Sometimes a game allows the player to drag the task bar to another orientation, e.g. vertical instead of horizontal.
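Requirements 1-3 above mostly constrain the data model rather than the UI. A hedged sketch of one way to model it: slots and abilities as separate entities, with a slot holding an optional link to an ability, so reassignment is just rewriting that link. None of these names come from bevy_ui; they are assumptions for illustration.

```rust
type Entity = u32;

#[derive(Clone, Copy, PartialEq, Debug)]
struct Slot {
    // Requirement 1: slots may be unassigned, and abilities may
    // outnumber slots (they simply go unreferenced).
    ability: Option<Entity>,
}

// Requirement 3: reassignment by rewriting the slot -> ability link.
// With `allow_duplicates` false, a later assignment of the same
// ability clears the earlier one.
fn assign(slots: &mut [Slot], index: usize, ability: Entity, allow_duplicates: bool) {
    if !allow_duplicates {
        for slot in slots.iter_mut() {
            if slot.ability == Some(ability) {
                slot.ability = None;
            }
        }
    }
    slots[index].ability = Some(ability);
}

fn main() {
    let fireball: Entity = 42;
    let mut bar = vec![Slot { ability: None }; 4];
    assign(&mut bar, 0, fireball, false);
    assign(&mut bar, 2, fireball, false);
    // With duplicates disallowed, the first assignment was cleared.
    assert_eq!(bar[0].ability, None);
    assert_eq!(bar[2].ability, Some(fireball));
}
```

Because abilities exist independently of slots (requirement 2), cooldown and activation logic can live on the ability entity, and the slot's job reduces to display and input routing.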

alice-i-cecile (Member, Author) commented May 28, 2021


Cool: this sounds like a fun challenge. I'll try mocking this out, and see how it might look. Some of this may be a bit hand-wavey (e.g. layout and animation), and I'll try to note which systems I expect would just be part of the engine (or a 3rd party crate).

It will absolutely be complex, but that's the nature of tacking on a ton of features :)


Can I add widget focus / indirect UI navigation to this example?

The task bar can also have a "focused" ability. Pressing Tab will set the focused ability to the next one in the current task bar. Pressing Shift+Tab will set the focused ability to the previous one in the task bar. Pressing Space will (attempt to) activate the focused ability (if one is focused). The actions done by Tab, Shift+Tab and Space may be able to be re-bound to other keys (or key combinations).
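The focus-cycling part of this request is small enough to sketch directly: Tab and Shift+Tab move an index through the slots, wrapping at either end. This is an illustrative helper, not an existing API.

```rust
// Advance the focused-slot index, wrapping in either direction.
// `backwards` corresponds to Shift+Tab; forwards to Tab.
fn next_focus(focused: usize, slot_count: usize, backwards: bool) -> usize {
    if backwards {
        (focused + slot_count - 1) % slot_count
    } else {
        (focused + 1) % slot_count
    }
}

fn main() {
    assert_eq!(next_focus(0, 4, false), 1); // Tab
    assert_eq!(next_focus(0, 4, true), 3);  // Shift+Tab wraps to the end
    assert_eq!(next_focus(3, 4, false), 0); // Tab wraps to the start
}
```

Space then just fires the same activation event as a click on the focused slot, which is exactly where a unified input dispatching convention pays off.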

alice-i-cecile (Member, Author)

From @DavidVonDerau:

> Let's say the hook is "when this entity dies". We'd like to fire off that event from some system, e.g. `HealthSystem`, such that whenever health < 0, we can create an `EntityDied` event with that entity ID.
>
> I think we add to the app schedule `.add_system_to_stage(..., add_hook::<EntityDied>.system())`.
>
> Now `HealthSystem` can create the `EntityDied` events and we have a system in the schedule that can run any callbacks for that event. So now let's say we have 2 types of creatures -- a Big Slime and a Little Slime. When a Big Slime dies, we want to replace it with a Little Slime, and when the Little Slime dies, we want it to be removed. What does that look like?

I'm going to tackle this in this RFC and extend the section on hooks at the very bottom to include this :)

alice-i-cecile (Member, Author) commented May 29, 2021

As I play with the generalized Hook trait more, I'm increasingly convinced that this should just be a standard game logic feature. It's super expressive, and solves the "specialized behavior without arbitrary bloat" problem really well even outside of UI.

If others agree, I think that we should split that out into its own RFC and implement the Callback behavior described here in terms of a built-in Hook.

EDIT: this should make this proposal much clearer and is compelling in its own right; I'm marking this as draft until that's done. The simplification of the logic makes this suitable as a side note in this RFC again!

BONUS EDIT: @TheRawMeatball proposes that I use `Fn(&mut World)` in place of my Callback type, which I think helps a lot.

```rust
let on_death = OnDeath::new(|world: &mut World, entity: Entity| {
    // `entity_mut` is needed for mutable access to the entity.
    let mut me = world.entity_mut(entity);
    me.remove::<BigSlime>();
    me.insert(LittleSlime);
    // Overwrites the old value of the Life component
    me.insert(Life(LITTLE_SLIME_LIFE));
});
```
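To show how a hook-runner system could connect the event side to the callback side, here is a dependency-free sketch: for each `EntityDied` event, look up that entity's `OnDeath` hook and run its callback with mutable world access. `World`, `OnDeath`, and `EntityDied` here are simplified stand-ins for illustration, not real Bevy types.

```rust
use std::collections::HashMap;

type Entity = u32;

// Toy "world": just a creature-type label per entity.
struct World {
    labels: HashMap<Entity, &'static str>,
}

// The hook: an exclusive-access callback, in the spirit of Fn(&mut World).
struct OnDeath {
    callback: Box<dyn Fn(&mut World, Entity)>,
}

struct EntityDied {
    entity: Entity,
}

// The hook-runner: pair each death event with that entity's hook.
fn run_death_hooks(
    world: &mut World,
    hooks: &HashMap<Entity, OnDeath>,
    events: &[EntityDied],
) {
    for event in events {
        if let Some(hook) = hooks.get(&event.entity) {
            (hook.callback)(world, event.entity);
        }
    }
}

fn main() {
    let mut world = World { labels: HashMap::from([(1, "BigSlime")]) };
    let mut hooks = HashMap::new();
    hooks.insert(1, OnDeath {
        // When the Big Slime dies, replace it with a Little Slime.
        callback: Box::new(|world: &mut World, entity: Entity| {
            world.labels.insert(entity, "LittleSlime");
        }),
    });
    run_death_hooks(&mut world, &hooks, &[EntityDied { entity: 1 }]);
    assert_eq!(world.labels[&1], "LittleSlime");
}
```

Because each callback takes `&mut World`, the runner must apply hooks serially; that is the trade-off for letting a hook do arbitrary mutations.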

@alice-i-cecile alice-i-cecile marked this pull request as draft May 29, 2021 04:21
@alice-i-cecile alice-i-cecile changed the title UI logic: systems and callbacks UI logic: systems and event handlers May 30, 2021
alice-i-cecile (Member, Author)

This post by the author of minigene discusses the importance of clear data flow and input / action separation in a way that feels very similar to the framework laid out here. I don't think it adds enough new content to be worth discussing in its own section as prior art, but it was interesting to see.

If others end up feeling differently, I'm happy to write it up.

rfcs/25-ui-systems-event-handlers.md (outdated review thread)
Co-authored-by: Federico Rinaldi <gisquerin@gmail.com>
DavidVonDerau

> This post by the author of minigene discusses the importance of clear data flow and input / action separation in a way that feels very similar to the framework laid out here. I don't think it adds enough new content to be worth discussing in its own section as prior art, but it was interesting to see.
>
> If others end up feeling differently, I'm happy to write it up.

If you're interested in other non-Rust specific reading on this, the terms to research are "layered architecture", "tiered architecture", and "onion architecture". It's buzzword-y and it's normally defined in terms of business applications with a UI, rules, and a database, but it's really a general concept of designing a software application so that less specific inputs ("mouse click") can be turned into something more specific ("button clicked"), and then acted on in a domain-specific manner ("save game clicked") without that logic being haphazardly located across the entire code-base.

So you have a raw UI inputs layer that generates mouse clicks, key presses, controller inputs, and then that goes into another UI input layer to toss away irrelevant actions (clicking on an empty part of the screen) and classify things into concrete UI events like buttons / selections / text entry / whatever, and then another layer specific to the game can deal with game data like "the user clicked the SAVE button".

The thing that makes layering something other than just chains of function calls through N classes is that normally the layers are decoupled through interfaces and events -- so the game specific layers would have zero knowledge of raw UI inputs, and the code that transforms raw UI inputs into button clicks has zero knowledge of any of the game specific code, etc.
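The layering described above can be sketched in a few lines: each layer only knows the layer directly below it, so raw input becomes a UI event becomes a game action. All types, button geometry, and names here are illustrative assumptions.

```rust
#[allow(dead_code)] // KeyPress is listed for shape, not exercised below.
enum RawInput {
    MouseClick { x: f32, y: f32 },
    KeyPress(char),
}

#[derive(Debug, PartialEq)]
enum UiEvent {
    ButtonClicked(&'static str),
}

#[derive(Debug, PartialEq)]
enum GameAction {
    SaveGame,
}

// Layer 1: classify raw input; clicks that hit nothing are discarded.
// Knows nothing about the game.
fn ui_layer(input: &RawInput) -> Option<UiEvent> {
    match input {
        // Pretend the save button occupies x < 100.0, y < 50.0.
        RawInput::MouseClick { x, y } if *x < 100.0 && *y < 50.0 => {
            Some(UiEvent::ButtonClicked("save"))
        }
        _ => None,
    }
}

// Layer 2: interpret UI events in domain terms; knows nothing of RawInput.
fn game_layer(event: &UiEvent) -> Option<GameAction> {
    match event {
        UiEvent::ButtonClicked("save") => Some(GameAction::SaveGame),
        _ => None,
    }
}

fn main() {
    let raw = RawInput::MouseClick { x: 10.0, y: 10.0 };
    let action = ui_layer(&raw).and_then(|e| game_layer(&e));
    assert_eq!(action, Some(GameAction::SaveGame));
}
```

The decoupling shows up in the signatures: `game_layer` never sees a `RawInput`, so remapping inputs or adding input devices cannot ripple into game logic.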

tower120 commented Jun 7, 2021

I'm reading this... and per-entity event iteration looks like traversing a flattened container of components. For the sake of generalization, what if we exposed some publicly accessible facility that allows fast queries for components with non-empty containers? (Maybe create a special component container for this?)
Could it be realized through tags (bevyengine/bevy#1527)?

dannymcgee

> Unresolved Questions
>
>   • How can we ergonomically control the order in which event handling commands are executed for entities triggering on the same type of event?

On the web, events always have an original "target" which is some DOM node (e.g., the button element that was clicked), and the event "bubbles up" to the root node (e.g., the window). Any intervening node along that path has the opportunity to read and handle the event in turn (including by intercepting it and preventing it from bubbling any further).

I have no idea if the UI "hierarchy" in Bevy is a real thing or just an abstraction for the layout engine, and I don't know what the performance consequences of trying to implement something like that might be, but from the perspective of an end-user, I think it's a pretty intuitive setup, because you can reuse the same mental model you already have of your UI structure to reason about the flow of events and data through the system.
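For reference, DOM-style bubbling over a parent-pointer hierarchy is simple to express: the event starts at its target and walks up toward the root, letting each node handle it and optionally stop propagation. Bevy entities do form a real hierarchy (Parent/Children components), so something like this is plausible, but nothing below is an existing Bevy API.

```rust
use std::collections::HashMap;

type Entity = u32;

// Walk from `target` toward the root, calling `handle` at each node.
// `handle` returns true to stop propagation (like stopPropagation()).
// Returns the nodes visited, target first.
fn bubble(
    target: Entity,
    parents: &HashMap<Entity, Entity>,
    mut handle: impl FnMut(Entity) -> bool,
) -> Vec<Entity> {
    let mut visited = vec![target];
    let mut current = target;
    while !handle(current) {
        match parents.get(&current) {
            Some(&parent) => {
                visited.push(parent);
                current = parent;
            }
            None => break, // reached the root
        }
    }
    visited
}

fn main() {
    // button (3) -> panel (2) -> window (1)
    let parents: HashMap<Entity, Entity> = [(3, 2), (2, 1)].into_iter().collect();
    // The panel (entity 2) intercepts the event before it reaches the window.
    let path = bubble(3, &parents, |e| e == 2);
    assert_eq!(path, vec![3, 2]);
}
```

The cost is one parent lookup per level, so performance scales with hierarchy depth rather than entity count, which is usually shallow for UIs.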

That said, I'm honestly not 100% sure if I fully understand the question, so forgive me if that's not actually relevant to the problem here. :P

alice-i-cecile (Member, Author)

I'm not terribly happy with this design. I'm going to close it down for now.

7 participants