
Explore other potential approaches to bevy system piping. #8857

@ZBemo

Description


What problem does this solve or what need does it fill?

As Bevy system piping currently stands, it is essentially useless to users.

As I understand it, piping systems currently amounts to composing two functions into a single system that carries all of their query parameters, with no performance benefit. In most cases it is both more ergonomic and roughly the same amount of boilerplate to simply write an enclosing system around two or more functions rather than use Bevy piping.
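
For reference, this is roughly what current piping looks like; a minimal sketch, where Enemy, count_enemies, and report_enemy_count are made-up names purely for illustration:

use bevy::prelude::*;

#[derive(Component)]
struct Enemy;

/// first half of the pipe: returns a value instead of writing it anywhere
fn count_enemies(enemies: Query<(), With<Enemy>>) -> usize {
    enemies.iter().count()
}

/// second half of the pipe: receives the previous system's output through `In`
fn report_enemy_count(In(count): In<usize>) {
    println!("enemy count: {count}");
}

fn main() {
    App::new()
        // the two halves become one system whose parameters are the union of both
        .add_systems(count_enemies.pipe(report_enemy_count))
        .run();
}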

To a user, this is what an intuitive way of achieving piping without Bevy's piping machinery might look like:

use bevy::prelude::*;

#[derive(Component)]
struct FooMarker;
#[derive(Component)]
struct Bar {
    tint: Color,
}

/// calculate positions one above each foo
fn calc_above_foos(foo_transforms: &Query<&Transform, With<FooMarker>>) -> Vec<Vec3> {
    foo_transforms
        .iter()
        .map(|transform| transform.translation + Vec3::Z)
        .collect()
}

/// set any bar that is one above a foo to a red tint
fn update_bars(
    above_foos: Vec<Vec3>,
    bars: &mut Query<(&Transform, &mut Bar), Without<FooMarker>>,
) {
    bars.iter_mut().for_each(|(transform, mut bar)| {
        if above_foos.contains(&transform.translation) {
            bar.tint = Color::RED;
        } else {
            bar.tint = Color::WHITE;
        }
    });
}

/// "pipe" the two functions together
fn tint_bars_above_foo(
    foo_transforms: Query<&Transform, With<FooMarker>>,
    mut bars: Query<(&Transform, &mut Bar), Without<FooMarker>>,
) {
    update_bars(calc_above_foos(&foo_transforms), &mut bars)
}

Although this is intuitive, requires less generic magic than Bevy system piping, and allows finer control over how functions are composed, it has two clear major downsides:

  1. Performance: current Bevy piping suffers from this same issue: every query required by any function piped into the system is treated as a dependency for the entire duration of the pipeline. My proposed design would likely allow Bevy to optimize this so that each dependency is only held during the specific function that queries it.

  2. Boilerplate: although this is nearly idiomatic Rust code, Bevy piping neatly cleans up this pattern so that the user doesn't find themselves writing glue functions like tint_bars_above_foo twenty times per program.

System piping is a significant step up from this naive approach, as it addresses the second issue, boilerplate, in a far cleaner way. However, I believe that until system piping also brings performance gains over the naive solution, it will be wasting a significant amount of its potential.

The first problem with this naive code, however, is what makes the potential performance gains of Bevy piping stand out to me: it shows that analyzing dependencies per pipeline instead of per individual system can easily lead to missed parallelism. This seems to be a well-known and documented drawback of system piping, yet it cannot be avoided ergonomically at the user level, even with a bespoke piping solution.
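
To make the missed parallelism concrete, here is a hedged sketch (the Hud component and system names are invented): once two functions are piped, the scheduler sees a single system whose declared access is the union of both halves, so an unrelated system conflicts with the whole pipe even though it only conflicts with one half.

use bevy::prelude::*;

#[derive(Component)]
struct Hud {
    text: String,
}

/// only reads Transform
fn collect_positions(transforms: Query<&Transform>) -> Vec<Vec3> {
    transforms.iter().map(|t| t.translation).collect()
}

/// only writes Hud
fn write_hud(In(positions): In<Vec<Vec3>>, mut hud: Query<&mut Hud>) {
    for mut h in &mut hud {
        h.text = format!("{} tracked positions", positions.len());
    }
}

/// only reads Hud, yet it cannot be scheduled alongside any part of the pipe below,
/// because the piped system declares write access to Hud for its whole duration
fn read_hud(hud: Query<&Hud>) {
    for h in &hud {
        println!("{}", h.text);
    }
}

fn main() {
    App::new()
        .add_systems((collect_positions.pipe(write_hud), read_hud))
        .run();
}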

What solution would you like?

A more performance-minded user might attempt something like the following, so that Bevy analyzes each stage of the pipeline with its own dependencies, increasing parallelism.

use bevy::prelude::*;

#[derive(Resource)]
struct OneAboveFoos(Vec<Vec3>);

#[derive(Component)]
struct FooMarker;
#[derive(Component)]
struct Bar {
    tint: Color,
}

/// calculate positions one above each foo
fn calc_above_foos(
    mut above_foos: ResMut<OneAboveFoos>,
    foo_transforms: Query<&Transform, With<FooMarker>>,
) {
    above_foos.0 = foo_transforms
        .iter()
        .map(|transform| transform.translation + Vec3::Z)
        .collect();
}

/// set any bar that is one above a foo to a red tint
fn update_bars(above_foos: Res<OneAboveFoos>, mut bars: Query<(&Transform, &mut Bar)>) {
    bars.iter_mut().for_each(|(transform, mut bar)| {
        if above_foos.0.contains(&transform.translation) {
            bar.tint = Color::RED;
        } else {
            bar.tint = Color::WHITE;
        }
    });
}

// ..

fn main() {
    App::new()
        //..
        .add_systems((calc_above_foos, update_bars).chain())
        //..
        ;
}

In terms of performance this is a big step up over both Bevy piping and the naive solution, but it still has a boilerplate disadvantage and is less idiomatic Rust, as it essentially recreates the in/out parameter pattern common in C++ and C#, which is generally discouraged in Rust. The parallelization gains, however, are likely worth the non-idiomatic approach in performance-sensitive systems.

I think that Bevy piping could be updated to abstract over this approach ergonomically, allowing the user to write cleaner and more idiomatic Rust while reaping the same, or potentially better, performance benefits.

I suggest having In-typed function arguments abstract over Res and return values abstract over ResMut, as in the example above, while keeping an API similar to current Bevy piping. This would allow users to write code that is both performant and idiomatic, and might look something like the code below.

Note that for increased ergonomics I use pipe() like the chain operator, where (sys_1, sys_2, sys_3).pipe() pipes sys_1 into sys_2 and sys_2 into sys_3.

/// the first stage of a pipeline: runs a lengthy computation and generates data for later steps
fn do_long_processing_to_foos(foos: Query<&Transform, With<Foo>>) -> Vec<Vec3> {
    //...
    todo!()
}
/// a function that would be in the middle of a pipeline, taking data from previous steps as well
/// as generating its own data
fn find_bars_in_relationship_to_foo(
    foo_locations_after_lengthy_processing: In<Vec<Vec3>>,
    bars: Query<&Transform, With<Bar>>,
) -> Vec<Vec3> {
    //...
    todo!()
}

/// display data generated by the pipeline to the user. Note how, once this boils down,
/// [`find_bars_in_relationship_to_foo`] and [`do_long_processing_to_foos`] would be able to run in
/// parallel with systems that need read or write access to [`UI`] components, even though part of
/// the pipeline accesses them mutably
fn display_foobars(data: In<Vec<Vec3>>, mut ui_q: Query<&mut UI>) {
    //..
}

fn main() {
    App::new()
        //...
        .add_systems(
            (do_long_processing_to_foos, find_bars_in_relationship_to_foo, display_foobars).pipe(),
        )
        //..
        ;
}

// This would boil down to something like the code below.
//
// It might require using something like #[derive(PipelineComponent)] on functions in order to
// generate proper resource structs, but would still require little boilerplate for potentially
// large performance benefits.

#[derive(Resource,Deref,DerefMut)]
struct StepOneOutput(Vec<Vec3>);
#[derive(Resource,Deref,DerefMut)]
struct StepTwoOutput(Vec<Vec3>);

fn display_foobars_outer(data: Res<StepTwoOutput>, mut ui_q: Query<&mut UI>) {
    // apply wrapper type
    let data = In(data.0.clone());
    // call function
    display_foobars(data, ui_q);
}

fn find_bars_in_relationship_to_foo_outer(
    mut out: ResMut<StepTwoOutput>,
    foo_locations_after_lengthy_processing: Res<StepOneOutput>,
    bars: Query<&Transform, With<Bar>>,
) {
    let data = In(foo_locations_after_lengthy_processing.0.clone());
    let fn_out = find_bars_in_relationship_to_foo(data, bars);
    out.0 = fn_out;
}


// (do_long_processing_to_foos_outer, which would write StepOneOutput, is elided for brevity)

fn main() {
    App::new()
        //...
        .add_systems(
            (
                do_long_processing_to_foos_outer,
                find_bars_in_relationship_to_foo_outer,
                display_foobars_outer,
            )
                .chain(),
        )
        //..
        ;
}

I believe this system is possible to generate with Bevy as it is now, and would be of great benefit to end users. I am, however, unfamiliar with Bevy internals and understand if for some reason this is infeasible or currently not worth the effort. I still believe it would be very useful for the Bevy project to reconsider system piping so that it allows separate dependency analysis for each stage of the pipeline in some way, even if it ends up looking nothing like what I have suggested.
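
For what it's worth, something close to per-stage dependency analysis already seems expressible with Bevy as it is today by pairing each value-returning stage with a small adapter system. The sketch below is only an illustration under that assumption; StageOut<T>, write_stage_out, and read_stage_out are hypothetical helpers standing in for whatever a real abstraction (derive macro or pipe() adapter) would generate, and the stages reuse the In/return style from the proposal above:

use bevy::prelude::*;

#[derive(Component)]
struct FooMarker;
#[derive(Component)]
struct Bar {
    tint: Color,
}

/// hypothetical generic output slot for one pipeline stage
#[derive(Default)]
struct StageOut<T>(T);
impl<T: Send + Sync + 'static> Resource for StageOut<T> {}

/// adapter: store a piped-in value into its `StageOut` resource
fn write_stage_out<T: Send + Sync + 'static>(In(value): In<T>, mut out: ResMut<StageOut<T>>) {
    out.0 = value;
}

/// adapter: read a `StageOut` resource back out as a piped value
fn read_stage_out<T: Clone + Send + Sync + 'static>(out: Res<StageOut<T>>) -> T {
    out.0.clone()
}

/// first stage, written in the proposed value-returning style
fn calc_above_foos(foo_transforms: Query<&Transform, With<FooMarker>>) -> Vec<Vec3> {
    foo_transforms
        .iter()
        .map(|transform| transform.translation + Vec3::Z)
        .collect()
}

/// second stage, written in the proposed In style
fn update_bars(
    In(above_foos): In<Vec<Vec3>>,
    mut bars: Query<(&Transform, &mut Bar), Without<FooMarker>>,
) {
    for (transform, mut bar) in &mut bars {
        bar.tint = if above_foos.contains(&transform.translation) {
            Color::RED
        } else {
            Color::WHITE
        };
    }
}

fn main() {
    App::new()
        .init_resource::<StageOut<Vec<Vec3>>>()
        .add_systems(
            (
                // each stage is its own scheduled system; only the `StageOut`
                // resource is shared between them
                calc_above_foos.pipe(write_stage_out::<Vec<Vec3>>),
                read_stage_out::<Vec<Vec3>>.pipe(update_bars),
            )
                .chain(),
        )
        .run();
}

With this shape, each stage's declared access is just its own queries plus the StageOut resource, so unrelated systems can run in parallel with whichever half of the pipeline they don't conflict with; a PipelineComponent-style derive or a pipe() adapter could presumably generate exactly this glue.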

What alternative(s) have you considered?

You could keep piping the way it is, with its suboptimal performance, and encourage end users to vendor their own piping solution for each project, wasting time for everybody and leading to non-idiomatic code.

Additional context

I'm not familiar with Bevy internals, so there may be some obvious things I'm missing.

Labels

A-ECS: Entities, components, systems, and events
C-Feature: A new feature, making something new possible
X-Controversial: There is active debate or serious implications around merging this PR
