
batching #33

Merged: 2 commits into superdump:batching on Sep 14, 2023

Conversation

@robtfm commented on Sep 13, 2023:

up to you if you prefer this

  • remove the BatchMeta trait and use T: PartialEq instead (see the sketch after this list)
  • remove BatchState; track the state directly in the function
  • make GpuArrayIndex::index non-optional
  • add other.flags & (MORPH | SKINNING) == 0 to BatchMeta3d
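
A minimal sketch of that first bullet, with illustrative names (ExampleMeta, count_batches, and the fields are not the PR's actual items): the hand-written comparison trait is replaced by a derived PartialEq, and generic code bounds on T: PartialEq.

    // Before: a dedicated trait existed only to compare batch metadata.
    trait BatchMeta<T> {
        fn matches(&self, other: &T) -> bool;
    }

    // After: derive PartialEq on the metadata struct and compare with ==.
    #[derive(PartialEq)]
    struct ExampleMeta {
        pipeline_id: u32,
        draw_function_id: u32,
    }

    // Hypothetical helper: count how many batches a run of metadata splits into.
    fn count_batches<T: PartialEq>(metas: &[T]) -> usize {
        if metas.is_empty() {
            return 0;
        }
        // each unequal adjacent pair starts a new batch
        1 + metas.windows(2).filter(|w| w[0] != w[1]).count()
    }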

    &mut next_batch,
    items.peek().and_then(|item| get_batch_meta(item)),
) else {
    // if the current phase item doesn't match the query, we don't modify it
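
For context, this fragment is consistent with a let-else that consumes the current lookahead and refills it by peeking the next phase item. A hypothetical reconstruction (the mem::replace call is an assumption, not shown in the diff):

    // hypothetical reconstruction: take the old lookahead as the current
    // item's metadata, and refill it by peeking the next phase item
    let Some((batch_meta, ..)) = std::mem::replace(
        &mut next_batch,
        items.peek().and_then(|item| get_batch_meta(item)),
    ) else {
        // if the current phase item doesn't match the query, we don't modify it
        continue;
    };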
superdump (Owner) commented:

This case would be hit if next_batch had been None, right? And that happens if either the .peek() returns None or the subsequent get_batch_meta() returns None.

superdump (Owner) commented:

It isn't clear to me whether/how this code handles the case where an entity in the phase doesn't match the query used in the get_batch_meta closure, which should split the batch. For transparent phases it is important that things are drawn in back-to-front order according to the sorted phase. If we batch the items returned by get_batch_meta() across the items where it returns None, we would violate that. Sorry if I'm just not seeing it and the code already takes care of this. :)

robtfm (Author) commented on Sep 13, 2023:

edit: say the items look like [match3, no match, match2, match1] and we iterate backwards, so the iterator yields:

[match1, match2, no match, match3], and the code will go:

loop 1: (batch_meta, ..) = match1, next_batch = match2
-> set batch start index
-> match1 matches match2 => skip update

loop 2: (batch_meta, ..) = match2, next_batch = None
-> (don't set batch start index as it's already set)
-> match2 doesn't match None => update with batch start index and current index == 0..=1

loop 3: (batch_meta, ..) fails, next_batch = match3
-> continue

loop 4: (batch_meta, ..) = match3, next_batch = None (as the iterator is empty)
-> set batch start index
-> match3 doesn't match None => update with batch start index and current index == 2..=2

so I think it should be ok?

robtfm (Author) commented:

and the phase.items look like:

data = [match3, no match, match2, match1]

initially: [phase item 1 = match3, phase item 2 = no match, phase item 3 = match2, phase item 4 = match1]
after loop1: [phase item 1 = match3, phase item 2 = no match, phase item 3 = match2, phase item 4 unchanged]
after loop2: [phase item 1 = match3, phase item 2 = no match, phase item 3 = 0..=1, phase item 4 unchanged]
after loop3: [phase item 1 = match3, phase item 2 unchanged, phase item 3 = 0..=1, phase item 4 unchanged]
after loop4: [phase item 1 =2..=2, phase item 2 unchanged, phase item 3 = 0..=1, phase item 4 unchanged]

so you're right, there is an issue: because I iterate backwards, the GpuArrayBuffer indices don't line up with the phase.items indices. In this case phase item 2 has GpuArrayIndex 2 but will have item index 1... oops

robtfm (Author) commented:

sorry, no, I take it back: if the item doesn't match, then the initial batch range will be 0..1, which would still be correct. If get_batch_meta returns None, the mesh doesn't get pushed onto the GpuArrayBuffer.

so I think it's all fine... but if the test didn't work then obviously not?

robtfm (Author) commented:

sorry for the flurry of comments.

> For transparent phases it is important that things are drawn in back-to-front order according to the sorted phase

I think this is a problem though: the batches will run in the right/sorted order, but the items within each batch will be reversed.

robtfm (Author) commented:

OK, I pushed a change to remove the reverse. Now I really reckon it'll work, but please run your interleaved test (or point me at it).
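
A minimal sketch of the forward (non-reversed) pass as it is described across this thread; BatchMeta, PhaseItem, and batch_phase here are stand-ins, not the actual code in this PR:

    use std::ops::RangeInclusive;

    // Hypothetical stand-ins for the real types in this PR.
    #[derive(PartialEq, Clone)]
    struct BatchMeta {
        pipeline_id: u32,
    }

    struct PhaseItem {
        meta: Option<BatchMeta>, // None: the item doesn't match the query
        batch_range: Option<RangeInclusive<u32>>,
    }

    // Forward pass: each run of equal, adjacent metadata gets one batch range.
    fn batch_phase(items: &mut [PhaseItem]) {
        let mut batch_start: Option<u32> = None; // GpuArrayBuffer index of batch start
        let mut index: u32 = 0; // next GpuArrayBuffer index
        for i in 0..items.len() {
            // a non-matching phase item is left unmodified and is not
            // pushed to the GpuArrayBuffer
            let Some(meta) = items[i].meta.clone() else { continue };
            let start = *batch_start.get_or_insert(index);
            // peek the immediate next item: a non-matching neighbour yields
            // None here, which breaks the batch and preserves sorted order
            let next = items.get(i + 1).and_then(|it| it.meta.as_ref());
            if next != Some(&meta) {
                items[i].batch_range = Some(start..=index);
                batch_start = None;
            }
            index += 1;
        }
    }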

@@ -255,16 +255,6 @@ struct BatchMeta2d {
    dynamic_offset: Option<NonMaxU32>,
}

impl BatchMeta<BatchMeta2d> for BatchMeta2d {
    fn matches(&self, other: &BatchMeta2d) -> bool {
        self.pipeline_id == other.pipeline_id
superdump (Owner) commented:

There is one aspect I've been considering: the order of comparison of members. It may be much faster to compare the thing that is most likely to break a batch first, rather than possibly the least likely thing (the pipeline). Does a derived PartialEq compare in struct member order?

robtfm (Author) commented:

Yes, it does seem to. The derive uses the source AST tokens, so it makes sense that it compares in declaration order.
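
To illustrate (with made-up field names): the derive expands to short-circuiting && comparisons in declaration order, so declaring the most-discriminating member first lets the comparison usually stop there.

    // A derived PartialEq compares fields in declaration order and
    // short-circuits on the first mismatch.
    #[derive(PartialEq)]
    struct ExampleBatchMeta {
        material_id: u32, // most likely to differ: declare it first
        pipeline_id: u32, // least likely to differ: compared last
    }

    // Rough expansion of the derive:
    // self.material_id == other.material_id
    //     && self.pipeline_id == other.pipeline_id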

self.pipeline_id == other.pipeline_id
    && self.draw_function_id == other.draw_function_id
    && self.mesh_asset_id == other.mesh_asset_id
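    // skinned and morph-target meshes can't be batched, so require the
    // flags to be clear on both self and other (keeping the check symmetrical)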
    && (self.mesh_flags & (MeshFlags::SKINNED | MeshFlags::MORPH_TARGETS).bits()) == 0
    && (other.mesh_flags & (MeshFlags::SKINNED | MeshFlags::MORPH_TARGETS).bits()) == 0
superdump (Owner) commented:

Making this symmetrical is good.

@superdump merged commit a9e3585 into superdump:batching on Sep 14, 2023