Features Walkthrough
This page presents the visible elements of the demo, and also summarizes their implementation.
As currently configured (`nColumns` and `nRows` in `configFiles/geometry/asteroid.txt`), each sphere is made up of 400 rectangular sections and 40 triangular sections (there are 20 triangles forming each of the north and south poles). A texture has been mapped to the vertices of the mesh, which is lit by a single point light source using the Phong reflection model.
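As a rough guide to how those numbers relate to the configuration values (the exact semantics of `nColumns` and `nRows` are defined by the project's geometry code, so the interpretation and the values below are assumptions for illustration only):

```cpp
#include <iostream>

int main() {
    // Assumed values for illustration; the real values live in
    // configFiles/geometry/asteroid.txt.
    const int nColumns = 20; // assumed: subdivisions around the sphere's axis
    const int nRows = 20;    // assumed: latitudinal bands of rectangular sections

    // Each latitudinal band contributes nColumns rectangular sections,
    // and each pole is capped by a fan of nColumns triangles.
    const int rectangularSections = nColumns * nRows; // 400
    const int triangularSections = 2 * nColumns;      // 40 (20 per pole)

    std::cout << rectangularSections << " rectangles, "
              << triangularSections << " triangles\n";
    return 0;
}
```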
Textured geometry was achieved through the coordination of two systems. The first is the abstract `Texture` class, which is responsible for binding textures to the pipeline and creating and binding texture sampler objects. The `Texture` class is inherited by `TextureFromDDS`, which uses the DirectX Toolkit's DDS texture loader to create textures and shader resource views for the textures.
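For reference, the DirectX Toolkit call that a class like `TextureFromDDS` would delegate to is `DirectX::CreateDDSTextureFromFile`, which produces both the texture resource and its shader resource view in one call; the wrapper below is an illustrative sketch, not the project's actual code:

```cpp
#include <d3d11.h>
#include <wrl/client.h>
#include <DDSTextureLoader.h> // DirectX Toolkit

using Microsoft::WRL::ComPtr;

// Illustrative helper: load a DDS file and return a shader resource view
// that a Texture-style class could later bind to the pipeline.
HRESULT LoadDDSTexture(ID3D11Device* device, const wchar_t* filename,
                       ComPtr<ID3D11ShaderResourceView>& srvOut) {
    ComPtr<ID3D11Resource> texture;
    // DirectX Toolkit loader: creates the texture and its shader resource view.
    return DirectX::CreateDDSTextureFromFile(
        device, filename, texture.GetAddressOf(), srvOut.GetAddressOf());
}
```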
The second system supporting textured geometry is the set of classes that inherit from `IGeometryRenderer` and encapsulate shader and constant buffer setup and updating, as well as pipeline execution. The `Shader` class, which is used by most derived classes of `IGeometryRenderer` to create and bind shaders, accepts filepaths of HLSL source code from configuration data. Therefore, creating texture-aware shaders is a matter of configuring `Shader` class instances to compile HLSL shader files that make use of textures.
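As an illustration of what such a configured `Shader` instance might do, the sketch below compiles an HLSL file whose path comes from configuration data; the helper function and its parameters are hypothetical, while `D3DCompileFromFile` is the standard Direct3D shader compiler API:

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>
#include <wrl/client.h>
#pragma comment(lib, "d3dcompiler.lib")

using Microsoft::WRL::ComPtr;

// Hypothetical helper: compile an HLSL file named in configuration data
// into a vertex shader. The entry point and target profile are also
// assumed to come from configuration.
HRESULT CreateVertexShaderFromConfig(ID3D11Device* device,
                                     const wchar_t* hlslPath,
                                     const char* entryPoint,
                                     ComPtr<ID3D11VertexShader>& vsOut) {
    ComPtr<ID3DBlob> bytecode, errors;
    HRESULT hr = D3DCompileFromFile(hlslPath, nullptr,
                                    D3D_COMPILE_STANDARD_FILE_INCLUDE,
                                    entryPoint, "vs_5_0", 0, 0,
                                    bytecode.GetAddressOf(),
                                    errors.GetAddressOf());
    if (FAILED(hr)) return hr;
    return device->CreateVertexShader(bytecode->GetBufferPointer(),
                                      bytecode->GetBufferSize(),
                                      nullptr, vsOut.GetAddressOf());
}
```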
The closest object seen when the demo launches is a row of rectangles, although each rectangle is actually a mesh of smaller rectangles, each of which is subdivided into two triangles. The lower right corner of each mesh bends back and forth.
Each mesh has a slightly different appearance. Specifically, they appear as follows, from left to right:
1. Mesh coloured with a basic colour gradient, and not affected by lighting. One triangle out of every pair is culled due to an inverted vertex winding order, meaning that the mesh appears to be an array of coloured triangles and triangular holes. The colour gradient is actually a visualization of the texture mapping coordinates assigned to the mesh vertices, which range from (0, 0, 0) (top left corner of the mesh) to (1, 1, 0) (bottom right corner of the mesh).
2. Same as (1), but all triangles are visible, so there are no holes.
3. Same as (1), but the mesh is affected by lighting from a single point light source placed behind the camera. The Phong lighting effect is visible in the screenshot when comparing this mesh with the first mesh.
4. Same as (3), but without holes.
5. Same as (1), but the mesh has been given a texture from an image file.
6. Same as (5), but without holes.
7. A mesh with both an image texture and Phong lighting from a single point light source behind the camera. Every second triangle in the mesh is culled. Take note of the specular reflection forming a bright curve on the mesh from the top left corner to the bottom right corner.
8. Same as (7), but without holes. A specular reflection similar to that on (7) is visible on this mesh in the screenshot.
Note that the meshes without holes do not have a back side (or, more precisely, their back faces are culled). As a result, the meshes are "invisible" when seen from the back.
The row of meshes demonstrates two important aspects of the demo code:
- The system can render objects in the same scene in different ways, in this case, with or without lighting.
- Objects can be animated (change shape from frame to frame).
A "rendering manager" allows individual objects to be rendered differently from each other. Specifically, the GeometryRendererManager
class initializes and stores several IGeometryRenderer
objects, each of which encapsulates a rendering pipeline configuration. A pipeline configuration consists of a set of shader programs and constant buffers which allow the system to produce an image from a set of vertices.
To draw an object, the program passes a GeometryRendererManager
instance to the object being drawn. The object asks the GeometryRendererManager
instance to draw it using the appropriate IGeometryRenderer
instance. So, for example, objects unaffected by lighting are not being rendered by lighting-aware shaders with lighting turned off, but are rendered using shaders that are not even capable of lighting.
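A hedged sketch of this double-dispatch arrangement is shown below; the class names follow the walkthrough, but the method names and the renderer-type enumeration are assumptions, not the project's actual interface:

```cpp
#include <map>
#include <memory>

// Forward declaration standing in for the Direct3D interface.
struct ID3D11DeviceContext;

// Assumed enumeration of pipeline configurations; the real project may
// identify its renderers differently.
enum class GeometryRendererType {
    ColorNoLight, ColorPhongLight, TextureNoLight, TexturePhongLight
};

class IGeometry; // models rendered by the system

// Each IGeometryRenderer encapsulates one pipeline configuration
// (shader programs plus constant buffers).
class IGeometryRenderer {
public:
    virtual ~IGeometryRenderer() = default;
    virtual void render(ID3D11DeviceContext* context, const IGeometry& geometry) = 0;
};

// The manager owns one renderer per configuration and dispatches to it.
class GeometryRendererManager {
public:
    void render(ID3D11DeviceContext* context, const IGeometry& geometry,
                GeometryRendererType type) {
        renderers.at(type)->render(context, geometry);
    }
private:
    std::map<GeometryRendererType, std::unique_ptr<IGeometryRenderer>> renderers;
};

// A model asks the manager to draw it with the renderer it needs, so unlit
// objects never touch lighting-aware shaders at all.
class IGeometry {
public:
    virtual ~IGeometry() = default;
    virtual void drawUsingRenderer(ID3D11DeviceContext* context,
                                   GeometryRendererManager& manager) const = 0;
};
```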
Linear vertex skinning is a generalization of world transformations in which each vertex is associated with a weighted average of world transformations as opposed to a single world transformation. It allows for animating geometry such that the independent motion of model control points causes smooth warping and stretching of the adjacent surfaces. Moreover, as done in the project, it is possible to achieve this effect by only sending the updated transformations of control points to the graphics card each frame, rather than applying vertex skinning using the CPU and sending the positions of all vertices to the graphics card each frame.
Linear vertex skinning is supported by all model classes in the project which inherit from `SkinnedColorGeometry`. `SkinnedColorGeometry` stores a list of control point transformations in structured buffers that it updates and binds to the pipeline each frame for use in the shader programs `shaderCode/skinnedColorVS_phongLight.hlsl` and `shaderCode/skinnedColorVS_noLight.hlsl`.
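For illustration, the per-vertex blend could look like the following CPU-side sketch using DirectXMath; in the project the equivalent work happens in the skinned-colour vertex shaders, with the transformations read from the structured buffers, and the four-influence vertex layout below is an assumption:

```cpp
#include <DirectXMath.h>
#include <vector>
using namespace DirectX;

// Illustrative skinned-vertex input: indices into the list of control point
// transformations, plus blend weights that sum to 1.
struct SkinnedVertex {
    XMFLOAT3 position;
    int      boneIndices[4];
    float    boneWeights[4];
};

// Transform the vertex by each referenced control point transformation and
// blend the results by the weights: the vertex follows a weighted average of
// world transformations rather than a single one.
XMFLOAT3 SkinPosition(const SkinnedVertex& v,
                      const std::vector<XMFLOAT4X4>& controlPointTransforms) {
    XMVECTOR p = XMLoadFloat3(&v.position);
    XMVECTOR skinned = XMVectorZero();
    for (int i = 0; i < 4; ++i) {
        XMMATRIX transform = XMLoadFloat4x4(&controlPointTransforms[v.boneIndices[i]]);
        XMVECTOR transformed = XMVector3Transform(p, transform);
        skinned = XMVectorAdd(skinned, XMVectorScale(transformed, v.boneWeights[i]));
    }
    XMFLOAT3 result;
    XMStoreFloat3(&result, skinned);
    return result;
}
```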
There are four different particle systems in the demo:
- A continuous stream of particles along a curved path, specifically a cubic Bezier spline, between two points. (Reddish-white curve in the top-right corner of the screenshot)
- An effect resembling ball lightning, which is a flurry of particles moving along a cubic Bezier spline between two points. While travelling along its path, the particle system fades in and out of existence due to changes in transparency. (Green circular smudge just to the left of the first particle system's left endpoint)
- A conical blast of particles. (Purple cone near the center of the screenshot)
- A spherical burst of particles, in which particles are arranged in a grid-like pattern that grows outwards and fades, looking somewhat like a firework explosion. (White bursts in the lower-left corner and on the right half of the screenshot)
All particle effects make use of textured billboards (created by the geometry shader) for displaying individual particles.
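In the demo this expansion happens in the particle geometry shaders; the CPU-side sketch below only illustrates the underlying corner computation from a particle's centre, a size, and the camera's right and up vectors (all names are illustrative):

```cpp
#include <DirectXMath.h>
#include <array>
using namespace DirectX;

// Expand a particle's centre position into the four corners of a
// camera-facing quad (billboard). The quad lies in the plane spanned by the
// camera's right and up vectors, so it always faces the viewer.
std::array<XMFLOAT3, 4> BillboardCorners(const XMFLOAT3& center, float halfSize,
                                         const XMFLOAT3& cameraRight,
                                         const XMFLOAT3& cameraUp) {
    XMVECTOR c = XMLoadFloat3(&center);
    XMVECTOR right = XMVectorScale(XMLoadFloat3(&cameraRight), halfSize);
    XMVECTOR up = XMVectorScale(XMLoadFloat3(&cameraUp), halfSize);

    std::array<XMFLOAT3, 4> corners;
    XMStoreFloat3(&corners[0], XMVectorSubtract(XMVectorSubtract(c, right), up)); // bottom-left
    XMStoreFloat3(&corners[1], XMVectorSubtract(XMVectorAdd(c, right), up));      // bottom-right
    XMStoreFloat3(&corners[2], XMVectorAdd(XMVectorSubtract(c, right), up));      // top-left
    XMStoreFloat3(&corners[3], XMVectorAdd(XMVectorAdd(c, right), up));           // top-right
    return corners;
}
```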
Particle systems are treated much the same way as other models, in that they also inherit from the `IGeometry` class, and are rendered using instances of classes derived from `IGeometryRenderer`.
There is generally one class derived from `IGeometry` for each particle generation algorithm (i.e. the process which produces the initial state of a particle system). The vertex format defining a particle was made quite general, such that very different particle systems can be created by changing only the configuration data used by a particle generator. For instance, the ball lightning (2) and cone (3) effects are both created as instances of the same class, `RandomBurstCone`.
Similarly, the shaders used to render particle systems (the HLSL files in the `shaderCode/particle` directory) allow for handling fairly generic particle behaviour consisting either of motion with respect to a spline, or linear motion starting from a point. All particles are capable of rotation in the plane perpendicular to the view direction, and the direction of rotation is reversed depending on whether the particle is moving towards or away from the viewer. Lastly, particles fade out and are reborn once they have faded beyond their individual thresholds, although the spherical explosion (4) and conical explosion (3) particle systems are not rendered for long enough to allow for particle rebirth.
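The fade-and-rebirth behaviour can be summarized by logic along the following lines; the field names and the simple linear fade are assumptions rather than the project's actual particle shader code:

```cpp
// Illustrative per-particle update: opacity decreases with age, and once it
// drops below the particle's individual threshold the particle is reborn
// (its age resets so it reappears at full opacity).
struct Particle {
    float age;              // time since birth (or last rebirth)
    float fadeRate;         // opacity lost per unit time (assumed linear fade)
    float rebirthThreshold; // opacity below which the particle restarts
};

// Advances the particle's age and returns its current opacity, applying
// rebirth when the fade threshold is crossed.
float UpdateOpacity(Particle& p, float deltaTime) {
    p.age += deltaTime;
    float opacity = 1.0f - p.fadeRate * p.age;
    if (opacity < p.rebirthThreshold) {
        p.age = 0.0f;  // reborn: start a new fade cycle
        opacity = 1.0f;
    }
    return opacity;
}
```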
The demo code allows for spline positions to be evaluated both by the CPU, using the `eval()` functions of the `Spline` class, and by the GPU, using the control points of a spline stored in a structured buffer resource (see `shaderCode/particle/splineParticlesVS.hlsl`). CPU-side spline evaluation is used by the ball lightning particle effect to set the position and orientation of the particle system, whereas GPU-side spline evaluation is used to set the positions of the particles in the tubular/stream particle effect. In both cases, the first derivative of the spline is computed to provide an orientation in addition to a position. Orientation information is not readily apparent in the ball lightning effect, due to its erratic internal motion, but is more obvious in the stream effect, where it is used as an axis for the rotation of particles around the spline, in a plane perpendicular to the spline.
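CPU-side evaluation of one cubic Bezier segment reduces to the standard position and first-derivative formulas; the sketch below uses DirectXMath, with function names that are illustrative rather than the `Spline` class's actual interface:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Evaluate one cubic Bezier segment at parameter t in [0, 1].
// p0..p3 are the segment's control points.
XMVECTOR XM_CALLCONV BezierPosition(FXMVECTOR p0, FXMVECTOR p1,
                                    FXMVECTOR p2, GXMVECTOR p3, float t) {
    float u = 1.0f - t;
    XMVECTOR result = XMVectorScale(p0, u * u * u);
    result = XMVectorAdd(result, XMVectorScale(p1, 3.0f * u * u * t));
    result = XMVectorAdd(result, XMVectorScale(p2, 3.0f * u * t * t));
    result = XMVectorAdd(result, XMVectorScale(p3, t * t * t));
    return result;
}

// First derivative of the same segment; its direction is the spline tangent,
// which the demo uses as an orientation (e.g. a rotation axis for particles
// in the stream effect).
XMVECTOR XM_CALLCONV BezierTangent(FXMVECTOR p0, FXMVECTOR p1,
                                   FXMVECTOR p2, GXMVECTOR p3, float t) {
    float u = 1.0f - t;
    XMVECTOR result = XMVectorScale(XMVectorSubtract(p1, p0), 3.0f * u * u);
    result = XMVectorAdd(result, XMVectorScale(XMVectorSubtract(p2, p1), 6.0f * u * t));
    result = XMVectorAdd(result, XMVectorScale(XMVectorSubtract(p3, p2), 3.0f * t * t));
    return result;
}
```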
The key classes involved in spline trajectories are as follows:
- The `Transformable` classes provide position and orientation information for spline control points.
- The `Knot` class hierarchy is responsible for converting location information from `Transformable` objects into spline control points and C1-continuous knots. Knots can be either static, in which case their associated control points do not change, or dynamic, in which case their control points can be animated. The stream particle effect uses dynamic knots, which causes it to vibrate (subtly).
- The `Spline` class hierarchy is responsible for ordering knots, outputting spline control points for GPU-side spline evaluation, and calculating spline positions and orientations when given spline parameter values for CPU-side spline evaluation.
- The `HomingTransformable` class makes use of the `Spline` class to implement a world transformation that tracks along a spline trajectory. It is the basis for the motion of the ball lightning particle system.
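For reference, C1 continuity across a knot means that adjacent cubic Bezier segments share the knot position and the tangent there, which fixes the next segment's first two control points; the helper below is an illustrative sketch of that constraint, not the `Knot` class's actual interface:

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Given the last two control points of the previous segment (p2, p3), a
// C1-continuous following segment must start at q0 = p3 and place its second
// control point at q1 = 2*p3 - p2 (p2 mirrored through p3), so the tangent
// is continuous across the knot.
void MakeC1ContinuousStart(const XMFLOAT3& p2, const XMFLOAT3& p3,
                           XMFLOAT3& q0, XMFLOAT3& q1) {
    XMVECTOR vp2 = XMLoadFloat3(&p2);
    XMVECTOR vp3 = XMLoadFloat3(&p3);
    XMStoreFloat3(&q0, vp3); // the shared knot position
    XMStoreFloat3(&q1, XMVectorSubtract(XMVectorScale(vp3, 2.0f), vp2));
}
```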
Screen-space effects are image processing algorithms applied to the rendered scene (from a first rendering pass) before it is displayed on screen. Therefore, the final image is produced using two rendering passes.
There are two screen-space special effects in the demo. When the project first launches, the motion blur effect is active. Pressing `o` will turn off all effects, and pressing it a second time will activate a ripple distortion effect, as described in the demo's README file.
To produce a motion blur effect, the program sets the current frame during a second rendering pass to be a weighted average of the past frame and the current frame obtained from the first rendering pass (refer to `shaderCode/ssse/ssseSmearPS.hlsl` for the details of the algorithm). Applying the effect involves three textures: the past frame, the first-pass render target, and the second-pass render target. The resource copying method of the Direct3D device context is used to update the past frame texture once the effect has been applied during the second rendering pass.
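The blend itself amounts to a per-pixel weighted average; the sketch below shows it in CPU-side form, with the weight value and names assumed for illustration (the actual algorithm is in `shaderCode/ssse/ssseSmearPS.hlsl`):

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Weighted average of the past frame and the freshly rendered frame for one
// pixel. A larger blendWeight keeps more of the past frame and produces a
// longer smear. (The value 0.7 is assumed, not taken from the project.)
XMFLOAT4 BlendMotionBlur(const XMFLOAT4& pastFrameColor,
                         const XMFLOAT4& currentFrameColor,
                         float blendWeight /* e.g. 0.7f */) {
    XMVECTOR past = XMLoadFloat4(&pastFrameColor);
    XMVECTOR current = XMLoadFloat4(&currentFrameColor);
    XMFLOAT4 result;
    XMStoreFloat4(&result, XMVectorLerp(current, past, blendWeight));
    return result;
}
```

Once the blended result has been produced, `ID3D11DeviceContext::CopyResource` (the resource copying method referred to above) can copy it into the past-frame texture for use in the next frame.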
The center of the ripple, which moves outwards over time, is located under the mouse cursor. Moving the mouse will move the ripple around the screen.
In the above screenshot, the center of the ripple coincides with the bright spot in the flow of particles in the upper-left. The ripple effect makes particles in the flow appear compressed in the outer area of increasing distortion, and elongated in the inner area of decreasing distortion.
The shockwave/ripple is a triangle-shaped piecewise linear function that maps an input distance (relative to the origin of the shockwave) to an output amount of scaling applied to texture sampling coordinates (see `shaderCode/ssse/ssseRipplePS.hlsl`).
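A triangle-shaped mapping of that kind can be sketched as follows; the parameter names and the exact ramp are assumptions, with the real mapping defined in `shaderCode/ssse/ssseRipplePS.hlsl`:

```cpp
#include <algorithm>
#include <cmath>

// Triangle-shaped piecewise linear function: the distortion is zero outside
// the ring, rises linearly to peakScale at the ring's centre radius, and
// falls linearly back to zero on the other side. ringRadius grows over time,
// so the ripple travels outwards from the point under the mouse cursor.
// (All parameter names and the exact shape are illustrative.)
float RippleScale(float distanceFromOrigin, float ringRadius,
                  float ringHalfWidth, float peakScale) {
    float offset = std::fabs(distanceFromOrigin - ringRadius);
    float falloff = 1.0f - offset / ringHalfWidth; // 1 at the peak, 0 at the edges
    return peakScale * std::max(falloff, 0.0f);    // clamp to zero outside the ring
}
```

The returned factor would then scale the sampling coordinates relative to the ripple's origin, which is what produces the compression and elongation visible on either side of the ring in the screenshot.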