Conversation
@pcwalton pcwalton commented Jun 14, 2024

This commit introduces a new type of camera, the *omnidirectional* camera. These cameras render to a cubemap texture, and as such extract into six different cameras in the render world, one for each side. The cubemap texture can be attached to a reflection probe as usual, which allows reflections to contain moving objects. To use an omnidirectional camera, create an `OmnidirectionalCameraBundle`.
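
A minimal spawn sketch (the `transform` field and use of `default()` here are assumptions about the bundle's shape, not confirmed API):

```rust
use bevy::prelude::*;

fn setup(mut commands: Commands) {
    // Spawn an omnidirectional camera at the reflection probe's position.
    // In the render world this extracts into six sub-cameras, one per
    // cubemap face; the resulting cubemap can then be attached to a
    // reflection probe as usual.
    commands.spawn(OmnidirectionalCameraBundle {
        transform: Transform::from_xyz(0.0, 1.0, 0.0),
        ..default()
    });
}
```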

Because omnidirectional cameras extract to six different sub-cameras in the render world, render-world extraction code that targets components present on cameras now needs to be aware of this fact and extract those components to the individual sub-cameras, not to the root camera entity. Such code also needs to run after omnidirectional camera extraction, as only then will the sub-cameras be present in the render world. New plugins, `ExtractCameraComponentPlugin` and `ExtractCameraInstancesPlugin`, are available to assist with this.
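
A registration sketch, assuming `ExtractCameraComponentPlugin` is generic over the component it copies to each sub-camera (the generic parameter and the `MyCameraSettings` component are illustrative assumptions):

```rust
use bevy::prelude::*;

/// A hypothetical per-camera component that render-world code reads.
#[derive(Component, Clone)]
struct MyCameraSettings {
    intensity: f32,
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Assumed usage: clones `MyCameraSettings` onto all six extracted
        // sub-cameras, running after omnidirectional camera extraction.
        .add_plugins(ExtractCameraComponentPlugin::<MyCameraSettings>::default())
        .run();
}
```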

Each side of an omnidirectional camera can be individually marked as active via the `ActiveCubemapSides` bitfield. This allows for the common technique of rendering only one (or two, or three) sides of the cubemap per frame, to reduce rendering overhead. It also allows for on-demand rendering, so that an application that wishes to optimize further can choose which sides to refresh. For example, an application might wish to rerender only those sides whose frusta contain moving entities.
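
A round-robin sketch, assuming `ActiveCubemapSides` is a bitflags-style component in which bit *n* activates face *n* (the `from_bits_retain` constructor is an assumption):

```rust
use bevy::prelude::*;

/// Activate exactly one cubemap face per frame, cycling through all six,
/// so the full cubemap refreshes every six frames.
fn round_robin_cubemap_sides(
    mut cameras: Query<&mut ActiveCubemapSides>,
    mut face: Local<u32>,
) {
    *face = (*face + 1) % 6;
    for mut sides in &mut cameras {
        // Assumed bitflags-style API: one bit per cubemap face.
        *sides = ActiveCubemapSides::from_bits_retain(1 << *face);
    }
}
```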

In addition to real-time reflection probes, this patch introduces much of the infrastructure necessary to support baking reflection probes from within Bevy, as opposed to in an external program such as Blender, which has been the status quo up to this point. However, even with this patch, there are still missing pieces needed to make this truly convenient:

  1. Baking a reflection probe requires more than just saving a cubemap: it requires pre-filtering the cubemap into diffuse and specular parts in the same way that the [glTF IBL Sampler](https://github.com/KhronosGroup/glTF-IBL-Sampler) does. This is not yet implemented in Bevy; see `GenerateEnvironmentMapLight` (#9414) for a previous attempt.

  2. The cubemap needs to be saved in `.ktx2` format, as that's the only format Bevy presently knows how to load. There's no comprehensive Rust crate for this, though my [glTF IBL Sampler UI](https://github.com/pcwalton/gltf-ibl-sampler-egui) has code to do it for the specific case of cubemaps.

  3. An editor UI is necessary for convenience, as otherwise every application will have to create some sort of bespoke tool that arranges scenes and saves the reflection cubemaps.

The `reflection_probes` example has been updated to add an option to enable dynamic reflection probes, as well as an option to spin the cubes so that the impact of the dynamic reflection probes is visible. Additionally, the static reflection probe, which was previously rendered in Blender, has been replaced with one rendered in Bevy. This results in a change in appearance, as Blender and Bevy render somewhat differently.

Partially addresses #12233.

## Changelog

### Added

  • An `OmnidirectionalCameraBundle` has been added to render to a cubemap. This allows reflection probes to reflect the dynamic scene.

@pcwalton pcwalton added A-Rendering Drawing game state to the screen C-Feature A new feature, making something new possible labels Jun 14, 2024
@pcwalton pcwalton added this to the 0.15 milestone Jun 14, 2024
@pcwalton pcwalton requested review from IceSentry and robtfm June 14, 2024 03:44
@alice-i-cecile alice-i-cecile added the M-Needs-Release-Note Work that should be called out in the blog due to impact label Jun 14, 2024
@NthTensor NthTensor added the S-Needs-Review Needs reviewer attention (from anyone!) to move forward label Jul 16, 2024
@tychedelia tychedelia (Member) left a comment

Not super familiar with the lightprobe code itself, but the camera/view stuff looks good. Just a few non-blocking comments on some random things.

```rust
/// *Main textures* are used as intermediate targets for rendering, before
/// postprocessing and tonemapping resolves to the final output.
#[derive(Clone, PartialEq, Eq, Hash)]
struct ViewTargetTextureKey {
```

Thanks, these are a really helpful cleanup.

```rust
commands
    .get_or_spawn(view_entity)
    .insert(render_view_light_probes);
if let (Some(view_render_layers), Some(light_probe_render_layers)) = (
```

In other places, we tend to fall back to default layer 0 if view layers are missing, is that intentionally not the case here?

```rust
pub clear_color: ClearColorConfig,
pub sorted_camera_index_for_target: usize,
pub exposure: f32,
pub render_target_layer: Option<NonMaxU32>,
```

Nbd, but this field wasn't immediately intuitive to me, and part of me feels like it should be a direct property of `NormalizedRenderTarget::Image` or a new `NormalizedRenderTarget::Cubemap` instead.

@alice-i-cecile alice-i-cecile modified the milestones: 0.15, 0.16 Oct 8, 2024
@BenjaminBrienen BenjaminBrienen added D-Complex Quite challenging from either a design or technical perspective. Ask for help! S-Waiting-on-Author The author needs to make changes or address concerns before this can be merged and removed S-Needs-Review Needs reviewer attention (from anyone!) to move forward labels Oct 31, 2024
@alice-i-cecile alice-i-cecile modified the milestones: 0.16, 0.17 Mar 1, 2025
@atlv24 atlv24 modified the milestones: 0.17, 0.18 Jul 8, 2025
github-merge-queue bot pushed a commit that referenced this pull request Jul 23, 2025
# Objective

This PR implements a robust GPU-based pipeline for dynamically
generating environment maps in Bevy. It builds upon PR #19037, allowing
these changes to be evaluated independently from the atmosphere
implementation.

While existing offline tools can process environment maps, generate mip
levels, and calculate specular lighting with importance sampling,
they're limited to static file-based workflows. This PR introduces a
real-time GPU pipeline that dynamically generates complete environment
maps from a single cubemap texture on each frame.

Closes #9380 

## Solution

Implemented a Single Pass Downsampling (SPD) pipeline that processes
textures without pre-existing mip levels or pre-filtered lighting data.

Single Pass Downsampling (SPD) pipeline:
- accepts any square, power-of-two cubemap up to 8192 × 8192 per face
and generates the complete mip chain in one frame;
- copies the base mip (level 0) in a dedicated compute dispatch
(`copy_mip0`) before the down-sampling pass;
- performs the down-sampling itself in two compute dispatches to fit
within subgroup limits;
- heavily inspired by Jasmine's prototype code.
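
For intuition, each downsampling step amounts to a 2×2 box reduction per face. A minimal CPU-side sketch of one such step (illustrative only; the actual SPD shader produces the whole chain in two dispatches):

```rust
/// Average each 2x2 block of the parent mip to produce the child mip.
/// `parent` is a square face of side `size`; returns a face of side `size / 2`.
fn downsample_mip(parent: &[[f32; 3]], size: usize) -> Vec<[f32; 3]> {
    let half = size / 2;
    let mut child = vec![[0.0f32; 3]; half * half];
    for y in 0..half {
        for x in 0..half {
            let mut sum = [0.0f32; 3];
            for (dy, dx) in [(0, 0), (0, 1), (1, 0), (1, 1)] {
                let texel = parent[(2 * y + dy) * size + (2 * x + dx)];
                for c in 0..3 {
                    sum[c] += texel[c];
                }
            }
            child[y * half + x] = [sum[0] / 4.0, sum[1] / 4.0, sum[2] / 4.0];
        }
    }
    child
}
```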

Pre-filtering pipeline:
- generates the specular Radiance Map using bounded-VNDF GGX importance
sampling for higher quality highlights and fewer fireflies;
- computes the diffuse Irradiance Map with cosine-weighted hemisphere
sampling;
- mirrors the forward-/reverse-tonemap workflow used by TAA instead of
exposing a separate *white-point* parameter;
- is based on the resources below together with the “Bounded VNDF
Sampling for Smith-GGX Reflections” paper.

The pre-filtering pipeline is largely based on these articles:
- https://placeholderart.wordpress.com/2015/07/28/implementation-notes-runtime-environment-map-filtering-for-image-based-lighting/
- https://bruop.github.io/ibl/
- https://gpuopen.com/download/Bounded_VNDF_Sampling_for_Smith-GGX_Reflections.pdf
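
As an illustration of the cosine-weighted hemisphere sampling used for the irradiance map, here is the standard mapping from two uniform random numbers to a hemisphere direction (textbook math, not the PR's shader code):

```rust
use std::f32::consts::PI;

/// Map two uniform random numbers in [0, 1) to a direction on the
/// hemisphere around +Z, distributed proportionally to cos(theta).
/// The PDF of the returned direction is cos(theta) / pi, which cancels
/// the cosine term in the irradiance integral.
fn cosine_weighted_hemisphere(u1: f32, u2: f32) -> [f32; 3] {
    let r = u1.sqrt();
    let phi = 2.0 * PI * u2;
    let x = r * phi.cos();
    let y = r * phi.sin();
    let z = (1.0 - u1).sqrt();
    [x, y, z]
}
```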

> The forward-/reverse-tonemap trick removes almost all fireflies
> without the need for a separate white-point parameter.
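
A sketch of that trick as it is commonly implemented (a Karis-style reversible tonemap; the exact weighting used by this PR's shaders may differ):

```rust
/// Compress HDR samples before averaging so that rare, extremely bright
/// samples (fireflies) cannot dominate the mean; invert afterwards.
fn tonemap(c: [f32; 3]) -> [f32; 3] {
    let luma = 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
    let w = 1.0 / (1.0 + luma);
    [c[0] * w, c[1] * w, c[2] * w]
}

fn reverse_tonemap(c: [f32; 3]) -> [f32; 3] {
    let luma = 0.2126 * c[0] + 0.7152 * c[1] + 0.0722 * c[2];
    // Clamp to avoid division by zero for luma near 1.
    let w = 1.0 / (1.0 - luma.min(0.999));
    [c[0] * w, c[1] * w, c[2] * w]
}
```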

Previous work: #9414

## Testing

The `reflection_probes.rs` example has been updated:

- The camera starts closer to the spheres so the reflections are easier
to see.
- The GLTF scene is spawned only when the reflection probe mode is
active (press Space).
- The third display mode (toggled with Space) shows the generated
cubemap chain.
- You can change the roughness of the center sphere with the Up/Down
keys.

## Render Graph

Composed of two nodes and a graph edge:
```
Downsampling -> Filtering
```

Pass breakdown:
```
downsampling_first_pass -> downsampling_second_pass ->
radiance_map_pass -> irradiance_map_pass
```

![render-graph](https://github.com/user-attachments/assets/3c240688-32f7-447a-9ede-6050b77c0bd1)

---

## Showcase
![image](https://github.com/user-attachments/assets/56e68dd7-9488-4d35-9bba-7f713a3e2831)


User facing API:
```rust
// In a system with `mut commands: Commands` and
// `asset_server: Res<AssetServer>`:
commands.entity(camera).insert(GeneratedEnvironmentMapLight {
    environment_map: asset_server
        .load("environment_maps/pisa_specular_rgb9e5_zstd.ktx2"),
    ..default()
});
```

## Computed Environment Maps
To use fully dynamic environment maps, create a placeholder image with
`Image::new_fill`, extract it to the render world, and then dispatch a
compute shader that binds the image as a 2D array storage texture.
Anything can then be rendered into this custom dynamic environment map;
PR #19037 already demonstrates this with the `atmosphere.rs` example. A
setup sketch follows below.
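
A setup sketch for those steps, assuming current Bevy `Image` APIs (`Image::new_fill`, `reinterpret_stacked_2d_as_array`); the size and format choices are illustrative:

```rust
use bevy::{
    prelude::*,
    render::{
        render_asset::RenderAssetUsages,
        render_resource::{Extent3d, TextureDimension, TextureFormat, TextureUsages},
    },
};

/// Create a blank cubemap that a compute shader can later write into.
fn make_dynamic_cubemap(images: &mut Assets<Image>) -> Handle<Image> {
    // Six square layers stacked vertically, then reinterpreted as an array
    // texture so it can be bound as a 2D array storage texture.
    let mut image = Image::new_fill(
        Extent3d {
            width: 512,
            height: 512 * 6,
            depth_or_array_layers: 1,
        },
        TextureDimension::D2,
        &[0u8; 8], // one Rgba16Float texel worth of zeroes
        TextureFormat::Rgba16Float,
        RenderAssetUsages::RENDER_WORLD,
    );
    image.reinterpret_stacked_2d_as_array(6);
    image.texture_descriptor.usage |=
        TextureUsages::STORAGE_BINDING | TextureUsages::TEXTURE_BINDING;
    images.add(image)
}
```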

We can extend this idea further and run the entire PBR pipeline from the
perspective of the light probe. It is possible to obtain some form of
global illumination or baked lighting information this way, especially
if we make use of irradiance volumes for the realtime aspect. This
method could very well be extended to bake indirect lighting in the
scene. #13840 should make this possible!

## Notes for reviewers

This PR no longer bundles any large test textures.

---------

Co-authored-by: atlas <email@atlasdostal.com>