
Current implementation

Next steps

1. VR-support

The engine offers built-in VR support that we utilize in the project. It is based on the OpenXR standard, which allows writing generalized code for all head-mounted displays (HMDs).
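The engine's OpenXR plugin handles all of this for us, but as a rough illustration of why OpenXR-based code stays headset-agnostic, the sketch below talks to the standard OpenXR C API directly: the same handful of calls report whichever HMD the installed runtime exposes. The application name is a placeholder and none of this is project code.

```cpp
// Minimal OpenXR sketch: create an instance and query the connected HMD.
// Illustrative only; the engine's built-in OpenXR plugin performs these
// steps (and many more) for us.
#include <openxr/openxr.h>
#include <cstring>
#include <cstdio>

int main() {
    XrInstanceCreateInfo createInfo{XR_TYPE_INSTANCE_CREATE_INFO};
    std::strncpy(createInfo.applicationInfo.applicationName, "SmokeVisVR",  // placeholder name
                 XR_MAX_APPLICATION_NAME_SIZE - 1);
    createInfo.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (!XR_SUCCEEDED(xrCreateInstance(&createInfo, &instance))) {
        std::fprintf(stderr, "No OpenXR runtime available\n");
        return 1;
    }

    // Ask the runtime for a head-mounted display; the same call works for
    // any vendor's HMD as long as an OpenXR runtime is installed.
    XrSystemGetInfo systemInfo{XR_TYPE_SYSTEM_GET_INFO};
    systemInfo.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
    XrSystemId systemId = XR_NULL_SYSTEM_ID;
    if (XR_SUCCEEDED(xrGetSystem(instance, &systemInfo, &systemId))) {
        XrSystemProperties props{XR_TYPE_SYSTEM_PROPERTIES};
        xrGetSystemProperties(instance, systemId, &props);
        std::printf("HMD found: %s\n", props.systemName);
    }

    xrDestroyInstance(instance);
    return 0;
}
```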

2. Optimization

The code still has to be benchmarked and profiled. Measuring performance will show whether parts of it, e.g. the raymarching, need further optimization.
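As a minimal illustration of what is meant by benchmarking, the sketch below times a stand-in workload with `std::chrono`. The real measurements would of course come from the engine's profiler and cover the actual raymarching code on both CPU and GPU; `RaymarchSlice()` is purely a placeholder.

```cpp
// Rough micro-benchmark harness (engine-agnostic sketch).
#include <chrono>
#include <cstdio>

static void RaymarchSlice() {
    // Placeholder workload standing in for one raymarching pass.
    volatile double acc = 0.0;
    for (int i = 0; i < 1'000'000; ++i) acc = acc + i * 0.5;
}

int main() {
    using Clock = std::chrono::steady_clock;
    constexpr int kRuns = 100;

    const auto start = Clock::now();
    for (int i = 0; i < kRuns; ++i) RaymarchSlice();
    const auto end = Clock::now();

    const double ms =
        std::chrono::duration<double, std::milli>(end - start).count() / kRuns;
    std::printf("Average time per run: %.3f ms\n", ms);
    return 0;
}
```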

3. Validation

The results still have to be validated, especially the smoke, since we try to emulate realistic situations in which visibility (and therefore also darkness) plays a major role.

4. Dynamic Lighting

Due to the nature of the implementation, the engine's lighting is completely separate from the smoke visualization. The light sources do not influence the smoke and vice versa, which results in rooms full of smoke being just as bright as rooms without any smoke. Light sources should therefore be dimmed depending on the amount of smoke that covers them. This could be done in a relatively simple fashion: probe the smoke volume from the light source in a few fixed directions, pre-calculate the resulting change of brightness every frame, and dim the light over time accordingly.
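A rough sketch of that idea could look like the following. `SampleSmokeDensity()` is a hypothetical stand-in for reading the smoke volume; in the project it would sample the same data the raymarcher uses.

```cpp
// Sketch of the proposed light dimming: probe the smoke volume in a few
// fixed directions around the light and derive a brightness factor from
// the accumulated density.
#include <array>
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical accessor into the smoke volume (0 = clear, 1 = fully opaque).
// Placeholder implementation; the project would sample its volume data here.
float SampleSmokeDensity(const Vec3& /*worldPos*/) { return 0.2f; }

// Returns a multiplier in [0, 1] to apply to the light's base intensity.
float ComputeSmokeDimFactor(const Vec3& lightPos, float probeLength, int steps) {
    // Six axis-aligned probe directions; enough for a coarse estimate.
    static const std::array<Vec3, 6> kDirs = {{
        {1, 0, 0}, {-1, 0, 0}, {0, 1, 0}, {0, -1, 0}, {0, 0, 1}, {0, 0, -1}
    }};

    float occlusion = 0.0f;
    for (const Vec3& d : kDirs) {
        // Accumulate density along the probe, like a very cheap raymarch.
        float along = 0.0f;
        for (int i = 1; i <= steps; ++i) {
            const float t = probeLength * static_cast<float>(i) / steps;
            const Vec3 p{lightPos.x + d.x * t, lightPos.y + d.y * t, lightPos.z + d.z * t};
            along += SampleSmokeDensity(p) / steps;
        }
        occlusion += along / kDirs.size();
    }
    // More accumulated smoke -> darker light. Clamp to keep the factor valid.
    return std::clamp(1.0f - occlusion, 0.0f, 1.0f);
}

int main() {
    const float factor = ComputeSmokeDimFactor({0, 0, 2}, /*probeLength=*/3.0f, /*steps=*/8);
    std::printf("Dim light to %.0f%% of its base intensity\n", factor * 100.0f);
    return 0;
}
```

The returned factor would be multiplied onto the light's base intensity each frame, so the dimming follows the smoke as it spreads.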

5. Visualize flame

There are built-in ways to visualize fire/flames in the engine, so we just have to extract information about the heat source by cutting off the heat-release-rate at a specific threshold and using the remaining cells as an indication of flames.
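A sketch of that extraction step, assuming the heat-release-rate values are stored as a flat 3D grid, could look like the following; the grid layout and the threshold value are illustrative only, and the resulting cells would then drive the engine's flame effects.

```cpp
// Sketch: threshold the heat release rate (HRR) field and collect the cells
// that are considered to contain flames.
#include <vector>
#include <cstddef>
#include <cstdio>

struct FlameCell { std::size_t x, y, z; };

// Returns the indices of all cells whose HRR exceeds the threshold
// (threshold is in the same unit as the HRR data).
std::vector<FlameCell> ExtractFlameCells(const std::vector<float>& hrr,
                                         std::size_t nx, std::size_t ny, std::size_t nz,
                                         float threshold) {
    std::vector<FlameCell> cells;
    for (std::size_t z = 0; z < nz; ++z)
        for (std::size_t y = 0; y < ny; ++y)
            for (std::size_t x = 0; x < nx; ++x) {
                const float value = hrr[x + nx * (y + ny * z)];
                if (value >= threshold)
                    cells.push_back({x, y, z});
            }
    return cells;
}

int main() {
    // Tiny synthetic 2x2x2 grid just to exercise the function.
    const std::vector<float> hrr = {0, 0, 150, 0, 0, 300, 0, 0};
    const auto flames = ExtractFlameCells(hrr, 2, 2, 2, 100.0f);
    std::printf("%zu flame cells\n", flames.size());
    return 0;
}
```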

6. VR-UI

The UI in VR has to be fully accessible with VR controllers and should not be a simple HUD; instead, there have to be other options to control the simulation. A basic VR UI is already implemented, but its functionality does not yet cover everything the regular UI offers.

7. Deployment

This includes CI/CD pipelines to run automated tests as well as static code analysis. We will also have to check for compatibility with different operating systems (Windows, Unix, macOS, etc.), GPUs (AMD, NVIDIA, etc.) and HMDs (Oculus, Vive, etc.).