bevy_pbr2: Use the infinite reverse right-handed perspective projection #16
base: pipelined-rendering
Conversation
We're gonna need to adapt this to the omni light changes. A naive merge conflict resolution resulted in no shadows being drawn.

This is probably due to the projection calculation being done in the fragment shader? (We probably need to adjust it to account for the depth changes?)

Yeah, we're aware of that. I'll fix it later.
Fixed the shadow projection issue. I've also included the flipping of near/far when extracting the orthographic projection for the 3D pipelines, because it doesn't break 2D, and otherwise orthographic depth would be broken in the 3D pipelines. I don't really like this approach of swapping them in the extraction step.

This is the orthographic projection matrix:

```rust
/// Creates a right-handed orthographic projection matrix with [0,1] depth range.
fn orthographic_rh(left: T, right: T, bottom: T, top: T, near: T, far: T) -> Self {
    let rcp_width = T::ONE / (right - left);
    let rcp_height = T::ONE / (top - bottom);
    let r = T::ONE / (near - far);
    Self::from_cols(
        V4::new(rcp_width + rcp_width, T::ZERO, T::ZERO, T::ZERO),
        V4::new(T::ZERO, rcp_height + rcp_height, T::ZERO, T::ZERO),
        V4::new(T::ZERO, T::ZERO, r, T::ZERO),
        V4::new(
            -(left + right) * rcp_width,
            -(top + bottom) * rcp_height,
            r * near,
            T::ONE,
        ),
    )
}
```

So if near and far are swapped, the affected terms are `r = T::ONE / (near - far)` and `r * near`. Ideas welcome.
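To make the swap concrete, here is a small standalone sketch (my own illustration, not engine code) of just those two z-column terms, showing that exchanging near and far reverses the [0,1] depth mapping:

```rust
// NDC depth produced by the z column of a right-handed [0,1]-depth
// orthographic projection: r * z_view + r * near, with r = 1 / (near - far).
// In a right-handed view space, points in front of the camera have negative z.
fn ortho_depth(near: f32, far: f32, z_view: f32) -> f32 {
    let r = 1.0 / (near - far);
    r * z_view + r * near
}

fn main() {
    // Conventional order: near plane -> 0, far plane -> 1.
    assert!(ortho_depth(0.1, 100.0, -0.1).abs() < 1e-6);
    assert!((ortho_depth(0.1, 100.0, -100.0) - 1.0).abs() < 1e-6);
    // Near and far swapped (reverse z): near plane -> 1, far plane -> 0.
    assert!((ortho_depth(100.0, 0.1, -0.1) - 1.0).abs() < 1e-6);
    assert!(ortho_depth(100.0, 0.1, -100.0).abs() < 1e-6);
}
```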
Force-pushed from 1eddb6b to 8aa843a
I think we just need to add the normal bias as well. It's important that the orthographic projection is a fairly tight fit around the geometry. Did you make the change to add the bias to the fragment depth under infinite reverse, instead of subtracting it?
I have just pushed a change that moves the depth biasing to linear depth. Before, we would project the fragment position to light-space NDC and then add a depth bias there. I found that quite unintuitive to work with. Some of us in #rendering looked over Unity, Unreal, and Godot. Unity and Godot only really have two bias configuration values: a single depth bias and a single normal bias. Unreal seemed to have all sorts of biases for lots of different cases and was difficult to understand. Godot was much simpler (which should be a good starting place): it has a depth bias for all lights, and a normal bias only for directional lights. The difference is that the depth bias is applied in world space. The currently-committed biasing now looks like this:

```rust
pub shadow_depth_bias: f32,
pub shadow_normal_bias: f32,
...
pub const DEFAULT_SHADOW_DEPTH_BIAS: f32 = 0.1;
pub const DEFAULT_SHADOW_NORMAL_BIAS: f32 = 0.1;
```

And in the fragment shader:

```wgsl
let depth_bias = light.shadow_depth_bias * dir_to_light.xyz;
let NdotL = dot(dir_to_light.xyz, in.world_normal.xyz);
let normal_bias = light.shadow_normal_bias * (1.0 - NdotL) * in.world_normal.xyz;
let biased_position = vec4<f32>(in.world_position.xyz + depth_bias + normal_bias, in.world_position.w);
```

The depth bias moves the fragment world position toward the light and is in world units. The normal bias moves the fragment world position along the surface normal as the light ray strikes the surface at more of a glancing angle, and is also in world units. Working with world units, and only having a depth and a normal bias, feels much more intuitive and produces the same results. Also note that this is projection-independent!

There is still a slight ring of acne from the directional light. I'm trying to figure out how to get rid of those damn rings but I'm not there yet. Perhaps someone else who has worked on shadows knows where they come from. My next step is to try to debug those particular pixels to understand what is going on there. This article has an interesting approach of adjusting the normal bias based on the shadow texel size, because the self-shadowing problem is that the volume covered by the shadow texel is popping out of the surface. It would be great if this could be configured programmatically...
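For reference, here is a standalone Rust port of that shader math (my own sketch, for illustration only): both biases are applied in world space before the position is projected into light space.

```rust
fn dot3(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

fn biased_position(
    world_position: [f32; 3],
    world_normal: [f32; 3], // assumed unit length
    dir_to_light: [f32; 3], // assumed unit length
    shadow_depth_bias: f32,
    shadow_normal_bias: f32,
) -> [f32; 3] {
    let n_dot_l = dot3(dir_to_light, world_normal);
    let mut out = world_position;
    for i in 0..3 {
        // Depth bias: move toward the light, in world units.
        let depth_bias = shadow_depth_bias * dir_to_light[i];
        // Normal bias: grows as the light strikes at a more glancing angle.
        let normal_bias = shadow_normal_bias * (1.0 - n_dot_l) * world_normal[i];
        out[i] += depth_bias + normal_bias;
    }
    out
}

fn main() {
    // Light directly overhead, surface facing up: NdotL = 1, so the normal
    // bias vanishes and only the depth bias moves the point toward the light.
    let p = biased_position([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0], 0.1, 0.1);
    assert!((p[1] - 0.1).abs() < 1e-6);
    assert!(p[0].abs() < 1e-6 && p[2].abs() < 1e-6);
}
```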
This is now based on top of #26. Also, while I did like the flexibility of being able to choose the projection, in #rendering the question of "why?" came up, along with @cart suggesting that it is probably better for users to choose one perspective projection and stick with it to avoid confusion. I agree. As such, I reverted the perspective projection method code, and this PR is back to using infinite reverse for perspective cameras and point light shadows, and orthographic with reverse z (for consistency) for orthographic cameras (I've tested that it doesn't negatively impact sprites or 3D orthographic) and directional light shadows. So if this is agreeable, it can be merged after merging #26.
Force-pushed from 05bd717 to 955c79f
Force-pushed from 94f674d to 27f668a
Now that we have main features, let's use them!
Makes the "Render App World" directly available to Extract step systems as a `RenderWorld` resource. Prior to this, there was no way to directly read / write render world state during the Extract step. The only way to make changes was through Commands (which were applied at the end of the stage).

```rust
// `thing` is an "app world resource".
fn extract_thing(thing: Res<Thing>, mut render_world: ResMut<RenderWorld>) {
    render_world.insert_resource(ExtractedThing::from(thing));
}
```

`RenderWorld` makes a number of scenarios possible:

* When an extract system does big allocations, it is now possible to reuse them across frames by retrieving old values from `RenderWorld` (at the cost of reduced parallelism from unique `RenderWorld` borrows).
* Enables inserting into the same resource across multiple extract systems.
* Enables using past `RenderWorld` state to inform future extract state (this should generally be avoided).

Ultimately this is just a subset of the functionality we want. In the future, it would be great to have "multi-world schedules" to enable fine-grained parallelism on the render world during the extract step. But that is a research project that almost certainly won't make it into 0.6. This is a good interim solution that should easily port over to multi-world schedules if/when they land.
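The allocation-reuse scenario can be modeled with a dependency-free toy (not the Bevy API; the type and function names here are my own): keeping an extracted buffer in the render world lets its allocation survive across frames.

```rust
// Toy stand-in for a render-world resource holding extracted data.
struct RenderWorldModel {
    extracted: Vec<u32>,
}

fn extract(app_data: &[u32], render_world: &mut RenderWorldModel) {
    // clear() keeps the Vec's capacity, so after the first frame no
    // reallocation happens as long as the data doesn't grow.
    render_world.extracted.clear();
    render_world.extracted.extend_from_slice(app_data);
}

fn main() {
    let mut rw = RenderWorldModel { extracted: Vec::new() };
    extract(&[1, 2, 3, 4], &mut rw);
    let cap_after_first_frame = rw.extracted.capacity();
    extract(&[5, 6], &mut rw);
    // The allocation from the first frame was reused, not shrunk.
    assert!(rw.extracted.capacity() >= cap_after_first_frame);
    assert_eq!(rw.extracted, vec![5, 6]);
}
```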
This decouples the opinionated "core pipeline" from the new (less opinionated) bevy_render crate. The "core pipeline" is intended to be used by crates like bevy_sprites, bevy_pbr, bevy_ui, and 3rd party crates that extend core rendering functionality.
It is useful to see which adapter is being used and which wgpu backend.
On nightly these two clippy lints fail:
- [needless_borrow](https://rust-lang.github.io/rust-clippy/master/#needless_borrow)
- [unused_unit](https://rust-lang.github.io/rust-clippy/master/#unused_unit)
Fix new nightly clippy lints on `pipelined-rendering`
# Objective

Port bevy_gltf to the pipelined-rendering branch.

## Solution

crates/bevy_gltf has been copied and pasted into pipelined/bevy_gltf2 and modifications were made to work with the pipelined-rendering branch. Notably, vertex tangents and vertex colours are not supported.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective

Restore the functionality of sprite atlases in the new renderer.

**Note:** This PR relies on bevyengine#2555

## Solution

Mostly just a copy-paste of the existing sprite atlas implementation; however, I unified the rendering between sprites and atlases.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
# Objective

This fixes not having access to `StorageTextureAccess` in the API, which is needed for using storage textures.

## Solution

Added it to the `use` in the `render_resource` module.

Co-authored-by: Dimas <skythedragon@outlook.com>
# Objective

- Allow you to compile Bevy with the `bevy_sprite2` feature, but without the `bevy_pbr2` feature.
- This currently fails because the `bevy_sprite2` crate does not require the `derive` feature of the `bytemuck` crate in its `Cargo.toml`, even though it is required to compile.

## Solution

- Add the `derive` feature of `bytemuck` to the `bevy_sprite2` crate.
# Objective

- Prevent the need to specify a sprite size when using the pipelined sprite renderer

## Solution

- Re-introduce the sprite auto-resize system from the old renderer
# Objective

Fix `ComputePipelineDescriptor` missing from the WGPU exports.

## Solution

Added it to the `pub use wgpu::{ ... }`.

Co-authored-by: Dimas <skythedragon@outlook.com>
# Objective

- Allow the user to set the clear color when using the pipelined renderer

## Solution

- Add a `ClearColor` resource that can be added to the world to configure the clear color

## Remaining Issues

Currently the `ClearColor` resource is cloned from the app world to the render world every frame. There are two ways I can think of around this:

1. Figure out why `app_world.is_resource_changed::<ClearColor>()` always returns `true` in the `extract` step and fix it, so that we only update the resource when it changes.
2. Require users to add the `ClearColor` resource to the render sub-app instead of the parent app. This is currently sub-optimal until we have labeled sub-apps, and probably a helper function on `App` such as `app.with_sub_app(RenderApp, |app| { ... })`. Even if we had that, I think it would be more than we want the user to have to think about; I don't think they should have to know about the render sub-app.

I think the first option is the best, but I could really use some help figuring out the nuance of why `is_resource_changed` is always returning true in that context.
Force-pushed from 27f668a to 3a0074c
…e#2703)

# Objective

The default perspective projection near plane being at 1 unit feels very far away if one considers units to directly map to real-world units such as metres. Not being able to see anything closer than 1m is unnecessarily limiting. Using a default of 0.1 makes more sense, as it is difficult to even focus on things closer than 10cm in the real world.

## Solution

- Changed the default perspective projection near plane to 0.1.
# Objective

A question was raised on Discord about the units of the `PointLight` `intensity` member. After digging around in the bevy_pbr2 source code and the [Google Filament documentation](https://google.github.io/filament/Filament.html#mjx-eqn-pointLightLuminousPower) I discovered that the intention by Filament was that the 'intensity' value for point lights would be in lumens. This makes a lot of sense as these are quite relatable units: basically all light bulbs sold over the past years are rated in lumens, as people move away from thinking about how bright a bulb is relative to a non-halogen incandescent bulb.

However, it seems that the derivation of the conversion between luminous power (lumens, denoted `Φ` in the Filament formulae) and luminous intensity (lumens per steradian, `I` in the Filament formulae) was missed, and I can see why, as it is tucked right under equation 58 at the link above. As such, while the formula states that for a point light `I = Φ / 4π`, we have been using `intensity` as if it were luminous intensity `I`.

Before this PR, the intensity field is luminous intensity in lumens per steradian. After this PR, the intensity field is luminous power in lumens, [as suggested by Filament](https://google.github.io/filament/Filament.html#table_lighttypesunits) (unfortunately the link jumps to the table's caption, so scroll up to see the actual table). I appreciate that it may be confusing to call this an intensity, but I think this is intended as more of a non-scientific, human-relatable general term, with a bit of hand-waving so that most light types can just have an intensity field that works in the same way, or at least with some relatable value. I'm inclined to think this is reasonable, rather than throwing terms like luminous power and luminous intensity at users.

## Solution

- Documented the `PointLight` `intensity` member as 'luminous power' in units of lumens.
- Added a table of examples relating various types of household lighting to lumen values.
- Added the mapping from luminous power to luminous intensity when premultiplying the intensity into the colour before it is made into a graphics uniform.
- Updated the documentation in `pbr.wgsl` to clarify the earlier confusion about the missing `/ 4π`.
- Bumped the intensity of the point lights in `3d_scene_pipelined` to 1600 lumens.

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
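The Filament relation in question is a one-liner; this sketch (function name is my own) shows the conversion from luminous power in lumens to luminous intensity in lumens per steradian (candela):

```rust
use std::f32::consts::PI;

// For a point light, luminous intensity I = Φ / (4π), where Φ is
// luminous power in lumens; the result is in lumens per steradian (candela).
fn luminous_intensity_from_power(luminous_power_lm: f32) -> f32 {
    luminous_power_lm / (4.0 * PI)
}

fn main() {
    // The 1600 lm point lights in 3d_scene_pipelined correspond to
    // roughly 127.3 cd.
    let i = luminous_intensity_from_power(1600.0);
    assert!((i - 127.32395).abs() < 1e-2);
}
```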
This is a rather simple but wide change, and it involves adding a new `bevy_app_macros` crate. Let me know if there is a better way to do any of this!

---

# Objective

- Allow adding and accessing sub-apps by using a label instead of an index

## Solution

- Migrate the bevy label implementation and derive code to the `bevy_utils` and `bevy_macro_utils` crates, and then add a new `SubAppLabel` trait to the `bevy_app` crate that is used when adding or getting a sub-app from an app.
Makes some tweaks to the SubApp labeling introduced in bevyengine#2695:

* Ergonomics improvements
* Removes unnecessary allocation when retrieving the subapp label
* Removes the newly added "app macros" crate in favor of bevy_derive
* Renames RenderSubApp to RenderApp

@zicklag (for reference)
…2700)

# Objective

Add support for configurable shadow map sizes.

## Solution

- Add `DirectionalLightShadowMap` and `PointLightShadowMap` resources, which just have size members, to the app world, and add `Extracted*` counterparts to the render world
- Use the configured sizes when rendering shadow maps
- Default sizes remain the same: 4096 for directional light shadow maps, 1024 for point light shadow maps (which are cube maps, so 6 faces at 1024x1024 per light)
…2726)

# Objective

Allow marking meshes as not casting / receiving shadows.

## Solution

- Added `NotShadowCaster` and `NotShadowReceiver` zero-sized type components.
- Extract these components into `bool`s in `ExtractedMesh`
- Only generate `DrawShadowMesh` `Drawable`s for meshes _without_ `NotShadowCaster`
- Add a `u32` bit `flags` member to `MeshUniform` with one flag indicating whether the mesh is a shadow receiver
- If a mesh does _not_ have the `NotShadowReceiver` component, then it is a shadow receiver, so the bit in the `MeshUniform` is set; otherwise it is not set.
- Added an example illustrating the functionality.

NOTE: I wanted to have the default state of a mesh as being a shadow caster and shadow receiver, hence the `Not*` components. However, I am on the fence about this. I don't want to have a negative performance impact, nor have people wondering why their custom meshes don't have shadows because they forgot to add `ShadowCaster` and `ShadowReceiver` components, but I also really don't like the double negatives the `Not*` approach incurs. What do you think?

Co-authored-by: Carter Anderson <mcanders1@gmail.com>
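The flags idea can be sketched in a few lines (the constant name and bit position are my assumptions, not necessarily what the PR uses): a `u32` in the mesh uniform carries a shadow-receiver bit that is set unless the entity has `NotShadowReceiver`.

```rust
// Hypothetical flag bit for illustration only.
const MESH_FLAGS_SHADOW_RECEIVER_BIT: u32 = 1 << 0;

// Compute the uniform flags from the presence of the NotShadowReceiver
// marker component (modeled here as a plain bool).
fn mesh_flags(has_not_shadow_receiver: bool) -> u32 {
    if has_not_shadow_receiver {
        0
    } else {
        MESH_FLAGS_SHADOW_RECEIVER_BIT
    }
}

fn main() {
    // Default: the mesh receives shadows, so the bit is set.
    assert_eq!(mesh_flags(false) & MESH_FLAGS_SHADOW_RECEIVER_BIT, 1);
    // Opted out with NotShadowReceiver: the bit is clear.
    assert_eq!(mesh_flags(true) & MESH_FLAGS_SHADOW_RECEIVER_BIT, 0);
}
```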
Force-pushed from 5c3c316 to 8b686c1
Haha, I did a rebase on this branch to "try to fix the history", but it was already fine. I was just looking at the wrong PR in the wrong repo :)
# Objective

- Avoid unnecessary work in the vertex shader of the numerous shadow passes
- Have the natural order of bind groups in the pbr shader: view, material, mesh

## Solution

- Separate out the vertex stage of pbr.wgsl into depth.wgsl
- Remove the unnecessary calculation of uv and normal, as well as the unnecessary vertex inputs and outputs
- Use depth.wgsl for the shadow passes
- Reorder the bind groups in pbr.wgsl and PbrShaders to be 0 - view, 1 - material, 2 - mesh, in decreasing order of rebind frequency
…ction

This projection is industry standard and offers better precision distribution over the depth range than forward projections.
…forms

View, inverse view, projection, and inverse projection are commonly-used matrices derived from the view. Model and inverse transpose model are commonly-used matrices for meshes.
Normals should be transformed by the inverse transpose of the model matrix so that they remain perpendicular to surfaces under non-uniform scale.
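A worked example of why (my own illustration, using a diagonal scale so the inverse transpose can be written by hand): under the non-uniform scale (2, 1, 1), the plane z = x has tangent (1, 0, 1) and normal (1, 0, -1). Transforming the normal by the model matrix breaks perpendicularity; the inverse transpose preserves it.

```rust
fn dot3(a: [f32; 3], b: [f32; 3]) -> f32 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

// Apply a diagonal (scale-only) matrix to a vector.
fn scale(v: [f32; 3], s: [f32; 3]) -> [f32; 3] {
    [v[0] * s[0], v[1] * s[1], v[2] * s[2]]
}

fn main() {
    let model_scale = [2.0, 1.0, 1.0];
    // For a diagonal matrix, the inverse transpose is just the reciprocals.
    let inverse_transpose_scale = [0.5, 1.0, 1.0];

    let tangent = [1.0, 0.0, 1.0];
    let normal = [1.0, 0.0, -1.0];

    let t = scale(tangent, model_scale);
    // Naive: transform the normal by the model matrix - no longer perpendicular.
    let n_wrong = scale(normal, model_scale);
    assert!(dot3(t, n_wrong).abs() > 1e-6);
    // Correct: transform by the inverse transpose - still perpendicular.
    let n_right = scale(normal, inverse_transpose_scale);
    assert!(dot3(t, n_right).abs() < 1e-6);
}
```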
Whether a projection is orthographic is indicated by the projection matrix's bottom-right element being 1.0, not the view projection matrix's bottom-right element. Now that the projection matrix is bound, this fix is possible.
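A minimal sketch of the check (column-major layout assumed; function name is my own): an orthographic projection leaves w' = 1 for any input point, so its bottom-right element is 1.0, while a perspective projection's is 0.0.

```rust
// `projection` is column-major: projection[col][row].
fn is_orthographic(projection: [[f32; 4]; 4]) -> bool {
    projection[3][3] == 1.0
}

fn main() {
    // Simplified orthographic matrix; only element [3][3] matters here.
    let orthographic = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, -0.5, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ];
    // Infinite reverse perspective: w' = -z_view, so bottom-right is 0.0.
    let perspective = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, 0.1, 0.0],
    ];
    assert!(is_orthographic(orthographic));
    assert!(!is_orthographic(perspective));
}
```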
Force-pushed from 8b686c1 to dc9e501
Force-pushed from f0457e0 to cf8a4ff
This has much better precision distribution over the [0,1] range.
NOTE: The initial PR does not address the use of orthographic projections in the PBR pipelines. However, I have tested it, and swapping the orthographic projection's near and far to reverse the depth direction seems to work just fine.