13.1. Ambient Occlusion

The lighting model built into OpenGL that we explored in Chapter 9 makes a simple assumption about ambient light, namely, that it is constant over the entire scene. But if you look around your current location, you can see that this assumption is almost always a poor one for generating a realistic image. Look underneath the table, and you will see an area that is darker than its surroundings. Look at the stack of books on the table, and you will see dark areas between the books and probably a darker area near their base. Almost every real-life scene contains a variety of complex shadows.

The alternative lighting models described in the previous chapter are an improvement over OpenGL's fixed function lighting model, but they still fall short. If you look carefully at Color Plate 21D, you can see that the lighting still does not look completely realistic. The area under the chin, the junctions of the legs and the boots, and the creases in the clothing are brightly lit. If this were a real scene, we know that these areas would be darker because they would be obscured by nearby parts of the model. Hence, this object looks fake.

A relatively simple way to add realism to computer-generated imagery is a technique called AMBIENT OCCLUSION. This technique uses a precomputed occlusion (or accessibility) factor to scale the calculated direct diffuse illumination factor. It can be used with a variety of illumination methods, including hemisphere lighting and image-based lighting, as discussed in the preceding chapter. It results in soft shadows that darken parts of the object that are only partially accessible to the overall illumination in a scene.

The basic idea with ambient occlusion is to determine, for each point on an object, how much of the potentially visible hemisphere is actually visible and how much is obstructed by nearby parts of the object. The hemisphere that is considered at each point on the surface is in the direction of the surface normal at that point. For instance, consider the venerable teapot in Figure 13.1. The top of the knob on the teapot's lid receives illumination from the entire visible hemisphere. But a point partway down inside the teapot's spout receives illumination only from a very small percentage of the visible hemisphere, in the direction of the small opening at the end of the spout.

Figure 13.1. A 2D representation of the process of computing occlusion (accessibility) factors. A point on the top of the knob on the teapot's lid has nothing in the way of the visible hemisphere (accessibility = 1.0) while a point inside the spout has its visible hemisphere mostly obscured (accessibility nearer to 0).


For a specific model we can precompute these occlusion factors and save them as per-vertex attribute values. Alternatively, we can create a texture map that stores these values for each point on the surface. One method for computing occlusion factors is to cast a large number of rays from each vertex and keep track of how many intersect another part of the object and how many do not. The percentage of such rays that are unblocked is the accessibility factor. The top of the lid on the teapot has a value of 1 since no other part of the model blocks its view of the visible hemisphere. A point partway down the spout has an accessibility value near 0, because its visible hemisphere is almost completely obscured.
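
Stated as a simple formula (a restatement of the preceding paragraph, not notation from the original text): if N rays are cast over the visible hemisphere at a surface point P, and V(i) is 1 when ray i escapes the model and 0 when it is blocked, then

    accessibility(P) = (1/N) * (V(1) + V(2) + ... + V(N))

so a fully exposed point has a value near 1 and a heavily obscured point has a value near 0.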

We then multiply the computed accessibility (or occlusion) factor by our computed diffuse reflection value. This has the effect of darkening areas that are obscured by other parts of the model. It is simple enough to use this value in conjunction with our other lighting models. For instance, the hemisphere lighting vertex shader that we developed in Section 12.1 can incorporate ambient occlusion with a few simple changes, as shown in Listing 13.1.

Listing 13.1. Vertex shader for hemisphere lighting with ambient occlusion

uniform vec3 LightPosition;
uniform vec3 SkyColor;
uniform vec3 GroundColor;

attribute float Accessibility;

varying vec3  DiffuseColor;

void main()
{
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 tnorm      = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightVec   = normalize(LightPosition - ecPosition);
    float costheta  = dot(tnorm, lightVec);
    float a         = 0.5 + 0.5 * costheta;

    DiffuseColor = mix(GroundColor, SkyColor, a) * Accessibility;

    gl_Position     = ftransform();
}

The only change made to this shader is to pass in the accessibility factor as an attribute variable and use this to attenuate the computed diffuse color value. The results are quite a bit more realistic, as you can see by comparing Color Plate 21D and G. The overall appearance is too dark, but this can be remedied by choosing a mid-gray for the ground color rather than black. Color Plate 21F shows ambient occlusion with a simple diffuse lighting model.

The same thing can be done to the image-based lighting shader that we developed in Section 12.2 (see Listing 13.2) and to the spherical harmonic lighting shader that we developed in Section 12.3 (see Listing 13.3). In the former case, the lighting is done in the fragment shader, so the per-vertex accessibility factor must be passed to the fragment shader as a varying variable. (Alternatively, the accessibility values could have been stored in a texture that could be accessed in the fragment shader.)
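
For context, a companion vertex shader for Listing 13.2 might look like the following sketch. It is not part of the original demo; the attribute name AccessibilityAttrib and the reflection-vector computation are assumptions, but the varying names match the ones the fragment shader below expects.

attribute float AccessibilityAttrib;  // per-vertex occlusion factor (assumed name)

varying vec3  ReflectDir;
varying vec3  Normal;
varying float Accessibility;          // read by the fragment shader in Listing 13.2

void main()
{
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    Normal          = normalize(gl_NormalMatrix * gl_Normal);
    ReflectDir      = reflect(normalize(ecPosition), Normal);

    Accessibility   = AccessibilityAttrib;

    gl_Position     = ftransform();
}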

Listing 13.2. Fragment shader for image-based lighting

uniform vec3  BaseColor;
uniform float SpecularPercent;
uniform float DiffusePercent;

uniform samplerCube SpecularEnvMap;
uniform samplerCube DiffuseEnvMap;

varying vec3  ReflectDir;
varying vec3  Normal;
varying float Accessibility;

void main()
{
    // Look up environment map values in cube maps

    vec3 diffuseColor =
        vec3(textureCube(DiffuseEnvMap,  normalize(Normal)));

    vec3 specularColor =
        vec3(textureCube(SpecularEnvMap, normalize(ReflectDir)));

    // Add lighting to base color and mix

    vec3 color = mix(BaseColor, diffuseColor*BaseColor, DiffusePercent);
    color     *= Accessibility;
    color      = mix(color, specularColor + color, SpecularPercent);

    gl_FragColor = vec4(color, 1.0);
}

Listing 13.3. Vertex shader for spherical harmonics lighting

varying vec3    DiffuseColor;
uniform float   ScaleFactor;
attribute float Accessibility;

const float C1 = 0.429043;
const float C2 = 0.511664;
const float C3 = 0.743125;
const float C4 = 0.886227;
const float C5 = 0.247708;

// Constants for Old Town Square lighting
const vec3 L00  = vec3( 0.871297,  0.875222,  0.864470);
const vec3 L1m1 = vec3( 0.175058,  0.245335,  0.312891);
const vec3 L10  = vec3( 0.034675,  0.036107,  0.037362);
const vec3 L11  = vec3(-0.004629, -0.029448, -0.048028);
const vec3 L2m2 = vec3(-0.120535, -0.121160, -0.117507);
const vec3 L2m1 = vec3( 0.003242,  0.003624,  0.007511);
const vec3 L20  = vec3(-0.028667, -0.024926, -0.020998);
const vec3 L21  = vec3(-0.077539, -0.086325, -0.091591);
const vec3 L22  = vec3(-0.161784, -0.191783, -0.219152);

void main()
{
    vec3 tnorm    = normalize(gl_NormalMatrix * gl_Normal);

    DiffuseColor =  C1 * L22 * (tnorm.x * tnorm.x - tnorm.y * tnorm.y) +
                    C3 * L20 * tnorm.z * tnorm.z +
                    C4 * L00 -
                    C5 * L20 +
                    2.0 * C1 * L2m2 * tnorm.x * tnorm.y +
                    2.0 * C1 * L21 * tnorm.x * tnorm.z +
                    2.0 * C1 * L2m1 * tnorm.y * tnorm.z +
                    2.0 * C2 * L11 * tnorm.x +
                    2.0 * C2 * L1m1 * tnorm.y +
                    2.0 * C2 * L10 * tnorm.z;

    DiffuseColor *= ScaleFactor;
    DiffuseColor *= Accessibility;

    gl_Position   = ftransform();
}

Results for ambient occlusion shaders are shown in Color Plate 21C, F, G, H, and I. These images come from a GLSL demo program called deLight, written by Philip Rideout. Philip also wrote the ray-tracer that generated per-vertex accessibility information for a number of different models.

Ambient occlusion is a view-independent technique, but the computation of the occlusion factors assumes that the object is rigid. If the object has moving parts, the occlusion factors would need to be recomputed for each position. Work has been done recently on methods for computing the occlusion factors in real time (see "Dynamic Ambient Occlusion and Indirect Lighting" by Michael Bunnell in the book GPU Gems 2).

During the preprocessing stage, we can also compute an attribute called a BENT NORMAL. We obtain this value by averaging all the nonoccluded rays from a point on a surface. It represents the average direction of the available light arriving at that particular point on the surface. Instead of using the surface normal to access an environment map, we use the bent normal to obtain the color of the light from the appropriate portion of the environment map. We can simulate a soft fill light with a standard point or spotlight by using the bent normal instead of the surface normal and then multiplying the result by the occlusion factor.
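
As a rough illustration of the idea, the diffuse portion of Listing 13.2 could be driven by a bent normal instead of the surface normal. The following fragment shader is only a sketch; the BentNormal varying (and the vertex shader that would supply it) is an assumption and is not part of the original demo.

uniform vec3        BaseColor;
uniform float       DiffusePercent;
uniform samplerCube DiffuseEnvMap;

varying vec3  BentNormal;      // average unoccluded direction from the preprocess
varying float Accessibility;   // per-vertex occlusion factor

void main()
{
    // Sample the environment in the average direction from which light
    // actually reaches this point, then attenuate by the occlusion factor.
    vec3 diffuseColor = vec3(textureCube(DiffuseEnvMap, normalize(BentNormal)));

    vec3 color = mix(BaseColor, diffuseColor * BaseColor, DiffusePercent);
    color     *= Accessibility;

    gl_FragColor = vec4(color, 1.0);
}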

Occlusion factors are useful not only for lighting but also for reducing reflections from the environment in areas that are occluded. Hayden Landis of Industrial Light & Magic has described how similar techniques have been used to control reflections in films such as Pearl Harbor and Jurassic Park III. The technique is modified still further to take into account the type of surface that is reflecting the environment. Additional rays cast around the main reflection vector are averaged to produce a blurred reflection. For diffuse surfaces (e.g., rubber), the additional rays are spread more widely around the main reflection vector, so the reflection appears more diffuse. For more specular surfaces, the additional rays stay nearer the main reflection vector, so the reflection is more mirrorlike.
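
In shader terms, the simplest version of this idea just attenuates the reflected environment by a separate reflection-occlusion factor, as in the sketch below. The ReflectionOcclusion input is an assumed precomputed value (analogous to the accessibility factor, but computed around the reflection direction); it does not come from the listings in this chapter.

uniform vec3        BaseColor;
uniform float       SpecularPercent;
uniform samplerCube SpecularEnvMap;

varying vec3  ReflectDir;
varying float ReflectionOcclusion;  // 0 = reflection fully blocked, 1 = unobstructed

void main()
{
    vec3 specularColor = vec3(textureCube(SpecularEnvMap, normalize(ReflectDir)));

    // Scale down the reflected environment in occluded areas before adding it in
    vec3 color = BaseColor + specularColor * ReflectionOcclusion * SpecularPercent;

    gl_FragColor = vec4(color, 1.0);
}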

