13.3. Deferred Shading for Volume Shadows

With contributions by Hugh Malan and Mike Weiblen

One of the disadvantages of shadow mapping as discussed in the previous section is that the performance depends on the number of lights in the scene that are capable of casting shadows. With shadow mapping, a separate rendering pass is required to generate the shadow map for each of these light sources, and the shadow maps are then all used in a final rendering pass. All these rendering passes can reduce performance, particularly if a great many polygons are to be rendered.

It is possible to do higher-performance shadow generation with a rendering technique that is part of a general class of techniques known as DEFERRED SHADING. With deferred shading, the idea is to first quickly determine the surfaces that will be visible in the final scene and apply complex and time-consuming shader effects only to the pixels that make up those visible surfaces. In this sense, the shading operations are deferred until it can be established just which pixels contribute to the final image. A very simple and fast shader can render the scene into an offscreen buffer with depth buffering enabled. During this initial pass, the shader stores whatever information is needed to perform the necessary rendering operations in subsequent passes. Subsequent rendering operations are applied only to pixels that are determined to be visible in the high-performance initial pass. This technique ensures that no hardware cycles are wasted performing shading calculations on pixels that will ultimately be hidden.

To render soft shadows with this technique, we need to make two passes. In the first pass, we do two things:

  1. We use a shader to render the geometry of the scene without shadows or lighting into the framebuffer.

  2. We use the same shader to store a normalized camera depth value for each pixel in a separate buffer. (This separate buffer is accessed as a texture in the second pass for the shadow computations.)

In the second pass, the shadows are composited with the existing contents of the framebuffer. To do this compositing operation, we render the shadow volume (i.e., the region in which the light source is occluded) for each shadow-casting object. In the case of a sphere, computing the shadow volume is relatively easy. The sphere's shadow is in the shape of a truncated cone, where the apex of the cone is at the light source and one end of the truncated cone is at the center of the sphere (see Figure 13.2). (It is somewhat more complex to compute the shadow volume for an object defined by polygons, but the same principle applies.)

Figure 13.2. The shadow volume for a sphere


We composite shadows with the existing geometry by rendering the polygons that define the shadow volume. This allows our second pass shader to be applied only to regions of the image that might be in shadow.
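
The listings that follow assume the application supplies the shadow-volume geometry itself. As an illustration only (this is not the authors' code), here is one way an application might tessellate the truncated cone for a sphere; the helper makeBasis, the cutoff distance farDist, and the quad-strip tessellation are all our own assumptions.

#include <math.h>
#include <GL/gl.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Build two vectors u and v that, together with axis, form an
   orthonormal basis (assumed helper, not from the book). */
static void makeBasis(const float axis[3], float u[3], float v[3])
{
    float t[3] = { 1.0f, 0.0f, 0.0f };
    float len;
    if (fabsf(axis[0]) > 0.9f) { t[0] = 0.0f; t[1] = 1.0f; }
    u[0] = axis[1]*t[2] - axis[2]*t[1];   /* u = normalize(cross(axis, t)) */
    u[1] = axis[2]*t[0] - axis[0]*t[2];
    u[2] = axis[0]*t[1] - axis[1]*t[0];
    len = sqrtf(u[0]*u[0] + u[1]*u[1] + u[2]*u[2]);
    u[0] /= len;  u[1] /= len;  u[2] /= len;
    v[0] = axis[1]*u[2] - axis[2]*u[1];   /* v = cross(axis, u) */
    v[1] = axis[2]*u[0] - axis[0]*u[2];
    v[2] = axis[0]*u[1] - axis[1]*u[0];
}

/* Tessellate the truncated cone that bounds a sphere's shadow volume.
   The near ring is the sphere's silhouette circle at its center; the
   far ring is that circle projected away from the light and cut off
   at farDist from the light.  End caps are omitted for brevity. */
void drawSphereShadowVolume(const float light[3], const float center[3],
                            float radius, float farDist, int slices)
{
    float axis[3], u[3], v[3];
    float dist, farScale;
    int i, j;

    /* axis points from the light through the sphere's center */
    axis[0] = center[0] - light[0];
    axis[1] = center[1] - light[1];
    axis[2] = center[2] - light[2];
    dist = sqrtf(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    axis[0] /= dist;  axis[1] /= dist;  axis[2] /= dist;
    makeBasis(axis, u, v);

    farScale = farDist / dist;   /* the cone's radius grows linearly */

    glBegin(GL_QUAD_STRIP);
    for (i = 0; i <= slices; ++i) {
        float a = 2.0f * (float)M_PI * (float)i / (float)slices;
        float nearPt[3], farPt[3];
        for (j = 0; j < 3; ++j) {
            /* point on the near ring at the sphere's center ... */
            nearPt[j] = center[j] + radius * (cosf(a)*u[j] + sinf(a)*v[j]);
            /* ... and the matching far-ring point, projected from the light */
            farPt[j]  = light[j] + (nearPt[j] - light[j]) * farScale;
        }
        glVertex3fv(nearPt);
        glVertex3fv(farPt);
    }
    glEnd();
}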

To draw a shadow, we use the texture map shown in Figure 13.3. This texture map expresses how much a visible surface point is in shadow relative to a shadow-casting object (i.e., how much the light reaching it is attenuated) as a function of two values: 1) the squared distance from the visible surface point to the central axis of the shadow volume, and 2) the distance along that axis from the center of the shadow-casting object to the visible surface point. The first value is used as the s coordinate for accessing the shadow texture, and the second value is used as the t coordinate. The net result is that shadows are relatively sharp when the shadow-casting object is very close to the fragment being tested, and the edges become softer as the distance increases.

Figure 13.3. A texture map used to generate soft shadows
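
The exact function used to paint this texture is not given, so the following application sketch is only a plausible reconstruction: a smoothstep transition centered at s = 0.5 whose width grows with t, so that the shadow edge softens with distance. The texture size and the penumbra widths are arbitrary choices.

#include <math.h>
#include <GL/gl.h>

#define SHADOW_TEX_SIZE 256

/* Build a shadow texture in the spirit of Figure 13.3.  Texel value
   0 = fully shadowed, 1 = unshadowed; the shadow edge sits at s = 0.5
   and the transition widens as t (distance from the occluder) grows. */
GLuint makeShadowTexture(void)
{
    static GLubyte texels[SHADOW_TEX_SIZE][SHADOW_TEX_SIZE];
    GLuint tex;
    int i, j;

    for (j = 0; j < SHADOW_TEX_SIZE; ++j) {           /* rows: t */
        float t = (float)j / (SHADOW_TEX_SIZE - 1);
        float halfWidth = 0.05f + 0.45f * t;          /* penumbra grows with t */
        for (i = 0; i < SHADOW_TEX_SIZE; ++i) {       /* columns: s */
            float s  = (float)i / (SHADOW_TEX_SIZE - 1);
            float e0 = 0.5f - halfWidth, e1 = 0.5f + halfWidth;
            /* smoothstep from shadow (0) to light (1) around s = 0.5 */
            float x = (s - e0) / (e1 - e0);
            float k;
            x = x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
            k = x * x * (3.0f - 2.0f * x);
            texels[j][i] = (GLubyte)(k * 255.0f);
        }
    }

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, SHADOW_TEX_SIZE,
                 SHADOW_TEX_SIZE, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, texels);
    return tex;
}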


In the second pass of the algorithm, we do the following:

  1. Draw the polygons that define the shadow volume. Only the fragments that could possibly be in shadow are accessed during this rendering operation.

  2. For each fragment rendered,

    1. Look up the camera depth value for the fragment as computed in the first pass.

    2. Calculate the coordinates of the visible surface point in the local space of the shadow volume. In this space, the x axis is the axis of the shadow volume and the origin is at the center of the shadow-casting object. The x component of this coordinate therefore corresponds to the distance along the shadow axis from the center of the shadow-casting object and is used directly as the second coordinate for the shadow texture lookup.

    3. Compute the squared distance between the visible surface point and the x axis of the shadow volume. This value becomes the first coordinate for the texture lookup.

    4. Access the shadow texture by using the computed index values to retrieve the light attenuation factor and store this in the output fragment's alpha value. The red, green, and blue components of the output fragment color are each set to 0.

    5. Use fixed functionality blending to apply the attenuation factor so that it properly darkens the existing framebuffer value: enable blending, set the blend source function to GL_ONE, and set the blend destination function to GL_SRC_ALPHA. Because the fragment's color is black, the blend simply scales the existing framebuffer color by the alpha value computed in the previous step.

Because the shadow (second-pass) shader is effectively a 2D compositing operation, the texel it reads from the depth texture must exactly match the pixel in the framebuffer it affects. So the texture coordinate and other quantities must be bilinearly interpolated without perspective correction. We achieve this by ensuring that w is constant across the polygon; dividing x, y, and z by w and then setting w to 1.0 does the job. Another issue is that when the viewer is inside the shadow volume, all of the volume's faces are culled away. We handle this special case by drawing a screen-sized quadrilateral, since the shadow volume would cover the entire scene (a sketch of this fallback follows).
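
Here is a sketch of that fallback. It is an assumption rather than the authors' code: the quadrilateral is placed just beyond the near plane in world coordinates so that the second-pass vertex shader (Listing 13.10) can process it like any other shadow-volume polygon, and the camera basis vectors and near-plane half-extents are assumed to be tracked by the application.

#include <GL/gl.h>

/* Hypothetical fallback for a camera inside a shadow volume: draw one
   screen-sized quadrilateral so the second-pass shader runs for every
   pixel.  camRight/camUp span the view plane; halfW/halfH are the
   near plane's half-extents. */
void drawScreenSizedQuad(const float camPos[3], const float camDir[3],
                         const float camRight[3], const float camUp[3],
                         float nearDist, float halfW, float halfH)
{
    static const float signs[4][2] = { {-1,-1}, {1,-1}, {1,1}, {-1,1} };
    const float k = 1.01f;  /* push slightly past the near plane so the
                               quad is not clipped; scaling the whole
                               offset keeps it exactly screen filling */
    int i, j;

    glBegin(GL_QUADS);      /* wound counterclockwise as seen by the camera */
    for (i = 0; i < 4; ++i) {
        float p[3];
        for (j = 0; j < 3; ++j)
            p[j] = camPos[j] + k * (camDir[j]   * nearDist
                                  + camRight[j] * halfW * signs[i][0]
                                  + camUp[j]    * halfH * signs[i][1]);
        glVertex3fv(p);
    }
    glEnd();
}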

13.3.1. Shaders for First Pass

The shaders for the first pass of the volume shadow algorithm are shown in Listings 13.8 and 13.9. In the vertex shader, to accomplish the standard rendering of the geometry (which in this specific case is all texture mapped), we just call ftransform and pass along the texture coordinate. The other lines of code compute the normalized value for the depth from the vertex to the camera plane. The computed value, CameraDepth, is stored in a varying variable so that it can be interpolated and made available to the fragment shader.

To render into two buffers by using a fragment shader, the application must call glDrawBuffers and pass it a pointer to an array containing symbolic constants that define the two buffers to be written. In this case, we might pass the symbolic constant GL_BACK_LEFT as the first value in the array and GL_AUX0 as the second value. This means that gl_FragData[0] will be used to update the value in the soon-to-be-visible framebuffer (assuming we are double-buffering) and the value for gl_FragData[1] will be used to update the value in auxiliary buffer number 0. Thus, the fragment shader for the first pass of our algorithm contains just two lines of code (Listing 13.9).
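
For example, the application-side setup might look like this (the array name is ours):

GLenum buffers[] = { GL_BACK_LEFT, GL_AUX0 };
glDrawBuffers(2, buffers);   /* gl_FragData[0] -> back buffer,
                                gl_FragData[1] -> aux buffer 0 */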

Listing 13.8. Vertex shader for first pass of soft volume shadow algorithm

uniform vec3  CameraPos;
uniform vec3  CameraDir;
uniform float DepthNear;
uniform float DepthFar;

varying float CameraDepth;  // normalized camera depth
varying vec2 TexCoord;

void main()
{
    // offset = vector to vertex from camera's position
    vec3 offset = (gl_Vertex.xyz / gl_Vertex.w) - CameraPos;

    // z = distance from vertex to camera plane
    float z = -dot(offset, CameraDir);

    // Depth from vertex to camera, mapped to [0,1]
    CameraDepth = (z - DepthNear) / (DepthFar - DepthNear);

    // typical interpolated coordinate for texture lookup
    TexCoord = gl_MultiTexCoord0.xy;

    gl_Position = ftransform();
}

Listing 13.9. Fragment shader for first pass of soft volume shadow algorithm

uniform sampler2D TextureMap;

varying float CameraDepth;
varying vec2  TexCoord;

void main()
{
    // draw the typical textured output to visual framebuffer
    gl_FragData[0] = texture2D(TextureMap, TexCoord);

    // write normalized vertex depth to the depth map's alpha component
    gl_FragData[1] = vec4(vec3(0.0), CameraDepth);
}

13.3.2. Shaders for Second Pass

The second pass of our shadow algorithm is responsible for compositing shadow information on top of what has already been rendered. After the first pass has been completed, the application must arrange for the depth information rendered into auxiliary buffer 0 to be made accessible for use as a texture. There are several ways we can accomplish this. One way is to set the current read buffer to auxiliary buffer 0 by calling glReadBuffer with the symbolic constant GL_AUX0 and then call glCopyTexImage2D to copy the values from auxiliary buffer 0 to a texture that can be accessed in the second pass of the algorithm. (A higher-performance method that avoids an actual data copy is possible with the EXT_framebuffer_object extension, which is expected to be promoted to the OpenGL core in a future version.)
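
A sketch of the copy-based approach follows; depthTexture, width, and height are assumed to have been created and sized by the application to match the framebuffer:

glReadBuffer(GL_AUX0);                      /* read from aux buffer 0 */
glBindTexture(GL_TEXTURE_2D, depthTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, /* copy it into the texture */
                 0, 0, width, height, 0);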

In the second pass, the only polygons rendered are the ones that define the shadow volumes for the various objects in the scene. We enable blending by calling glEnable with the symbolic constant GL_BLEND, and we set the blend function by calling glBlendFunc with a source factor of GL_ONE and a destination factor of GL_SRC_ALPHA. The fragment shader outputs the shadow color and an alpha value obtained from a texture lookup operation; because the shadow color is black, this alpha value simply scales the color already in the framebuffer, darkening it where the shadow falls.
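
The blending setup amounts to two calls. Because the incoming shadow color is black, the source term contributes nothing, and the blend simply scales what is already in the framebuffer by the shader's alpha output:

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_SRC_ALPHA);  /* result = src + dst * alpha = dst * alpha */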

The vertex shader for the second pass (see Listing 13.10) is responsible for computing the coordinates for accessing the depth values that were computed in the first pass. We accomplish the computation by transforming the incoming vertex position, dividing the x, y, and z components by the w component, and then scaling and biasing the x and y components to transform them from the range [-1,1] into the range [0,1]. Values for ShadowNear and ShadowDir are also computed. These are used in the fragment shader to compute the position of the fragment relative to the shadow-casting object.

Listing 13.10. Vertex shader for second pass of soft volume shadow algorithm

uniform mat3 WorldToShadow;
uniform vec3 SphereOrigin;

uniform vec3 CameraPos;
uniform vec3 CameraDir;
uniform float DepthNear;
uniform float DepthFar;

varying vec2 DepthTexCoord;
varying vec3 ShadowNear;
varying vec3 ShadowDir;

void main()
{
    vec4 tmp1 = ftransform();
    gl_Position = tmp1;

    // Predivide out w to avoid perspective-correct interpolation.
    // The quantities being interpolated are screen-space texture
    // coordinates and vectors to the near and far shadow plane,
    // all of which have to be bilinearly interpolated.
    // This could potentially be done by setting glHint,
    // but it wouldn't be guaranteed to work on all hardware.

    gl_Position.xyz /= gl_Position.w;
    gl_Position.w = 1.0;

    // Grab the transformed vertex's XY components as a texcoord
    // for sampling from the depth texture from pass 1.
    // Normalize them from [-1,-1]..[1,1] to [0,0]..[1,1]

    DepthTexCoord = gl_Position.xy * 0.5 + 0.5;

    // offset = vector to vertex from camera's position
    vec3 offset = (gl_Vertex.xyz / gl_Vertex.w) - CameraPos;

    // z = distance from vertex to camera plane
    float z = -dot(offset, CameraDir);

    vec3 shadowOffsetNear = offset * DepthNear / z;
    vec3 shadowOffsetFar  = offset * DepthFar / z;

    vec3 worldPositionNear = CameraPos + shadowOffsetNear;
    vec3 worldPositionFar  = CameraPos + shadowOffsetFar;

    vec3 shadowFar  = WorldToShadow * (worldPositionFar - SphereOrigin);
    ShadowNear = WorldToShadow * (worldPositionNear - SphereOrigin);
    ShadowDir = shadowFar - ShadowNear;
}

The fragment shader for the second pass is shown in Listing 13.11. In this shader, we access the cameraDepth value computed by the first pass by performing a texture lookup. We then map the fragment's position into the local space of the shadow volume. The mapping from world to shadow space is set up so that the center of the occluding sphere maps to the origin, and the circle of points on the sphere at the terminator between light and shadow maps to a circle in the YZ plane.

The variables d and l are respectively the distance along the shadow axis and the squared distance from it. These values are used as texture coordinates for the lookup into the texture map defining the shape of the shadow.

With the mapping described above, points on the terminator map to a circle in the YZ plane. The texture map has been painted with the transition from light to shadow occurring at s=0.5; to match this, the mapping from world to shadow is set up so that the terminator circle maps to a radius of sqrt(0.5).
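
The listings do not show how the application builds WorldToShadow. The following sketch is our own reconstruction under a distant-light assumption (the terminator is approximated by the sphere's great circle): the first row maps distance along the shadow axis into [0,1] over an assumed shadowLength, and the other two rows scale the terminator circle to a radius of sqrt(0.5). It reuses the makeBasis helper from the shadow-volume sketch earlier.

/* Build and load the WorldToShadow mat3 uniform (a reconstruction,
   not the authors' code).  loc is the uniform's location. */
void loadWorldToShadow(GLint loc, const float light[3],
                       const float center[3], float radius,
                       float shadowLength)
{
    float axis[3], u[3], v[3], m[9];
    float dist, radialScale;
    int j;

    axis[0] = center[0] - light[0];
    axis[1] = center[1] - light[1];
    axis[2] = center[2] - light[2];
    dist = sqrtf(axis[0]*axis[0] + axis[1]*axis[1] + axis[2]*axis[2]);
    axis[0] /= dist;  axis[1] /= dist;  axis[2] /= dist;
    makeBasis(axis, u, v);

    radialScale = sqrtf(0.5f) / radius;     /* sphere radius -> sqrt(0.5) */
    for (j = 0; j < 3; ++j) {
        m[j]     = axis[j] / shadowLength;  /* row 0: axial distance -> t */
        m[3 + j] = u[j] * radialScale;      /* rows 1, 2: distance from   */
        m[6 + j] = v[j] * radialScale;      /* the axis -> s (when squared) */
    }
    /* we built rows, so ask GL to transpose on upload (desktop GL
       permits transpose = GL_TRUE) */
    glUniformMatrix3fv(loc, 1, GL_TRUE, m);
}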

Finally, the value retrieved from the shadow texture is used as the alpha value for blending the shadow color with the geometry that has already been rendered into the framebuffer.

Listing 13.11. Fragment shader for second pass of soft volume shadow algorithm

uniform sampler2D DepthTexture;
uniform sampler2D ShadowTexture;

varying vec2 DepthTexCoord;
varying vec3 ShadowNear;
varying vec3 ShadowDir;

const vec3 shadowColor = vec3(0.0);

void main()
{
    // read from DepthTexture
    // (depth is stored in texture's alpha component)
    float cameraDepth = texture2D(DepthTexture, DepthTexCoord).a;

    vec3 shadowPos = (cameraDepth * ShadowDir) + ShadowNear;
    float l = dot(shadowPos.yz, shadowPos.yz);
    float d = shadowPos.x;

    // k = shadow density: 0=opaque, 1=transparent
    // (use texture's red component as the density)
    float k = texture2D(ShadowTexture, vec2(l, d)).r;

    gl_FragColor = vec4(shadowColor, k);
}

Figure 13.4 shows the result of this multipass shading algorithm in a scene with several spheres. Note how the shadows for the four small spheres get progressively softer edges as the spheres increase in distance from the checkered floor. The large sphere that is farthest from the floor casts an especially soft shadow.

Figure 13.4. Screen shot of the volume shadows shader in action. Notice that spheres that are farther from the surface have shadows with softer edges.


The interesting part of this deferred shading approach is that the volumetric effects are implemented by rendering geometry that bounds the volume of the effect. This almost certainly means processing fewer vertices and fewer fragments. The shaders required are relatively simple and quite fast. Instead of rendering the geometry once for each light source, the geometry is rendered just once, and all the shadow volumes due to all light sources can be rendered in a single compositing pass. Localized effects such as shadow maps, decals, and projective textures can be accomplished easily. Instead of having to write tricky code to figure out the subset of the geometry to which the effect applies, you write a shader that is applied to each pixel and use that shader to render geometry that bounds the effect. This technique can be extended to render a variety of different effects: volumetric fog, lighting, and improved caustics, to name a few.

