
9.1. Transformation

The features of the OpenGL Shading Language make it very easy to express transformations between the coordinate spaces defined by OpenGL. We've already seen the transformation that will be used by almost every vertex shader. The incoming vertex position must be transformed into clipping coordinates for use by the fixed functionality stages that occur after vertex processing. This is done in one of two ways, either this:

// Transform vertex to clip space
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

or this:

gl_Position = ftransform();

The only difference between these two methods is that the second case is guaranteed to compute the transformed position in exactly the same way as the fixed functionality method. Some implementations may have different hardware paths that result in small differences between the transformed position as computed by the first method and as computed by fixed functionality. This can cause problems in rendering if a multipass algorithm is used to render the same geometry more than once. In this case, the second method is preferred because it produces the same transformed position as the fixed functionality.
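For illustration, a minimal complete vertex shader built on the second method might look like this (a sketch; gl_FrontColor simply passes the primary color through):

// Minimal vertex shader: ftransform() guarantees a clip-space position
// identical to the one computed by the fixed-function pipeline, which
// matters for multipass rendering
void main()
{
    gl_FrontColor = gl_Color;     // pass the primary color through
    gl_Position   = ftransform(); // invariant with fixed functionality
}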

OpenGL specifies that light positions are transformed by the modelview matrix when they are provided to OpenGL. This means that they are stored in eye coordinates. It is often convenient to perform lighting computations in eye space, so it is often necessary to transform the incoming vertex position into eye coordinates as shown in Listing 9.1.

Listing 9.1. Computation of eye coordinate position

vec4 ecPosition;
vec3 ecPosition3;    // nonhomogeneous point in 3-space

// Transform vertex to eye coordinates; NeedEyePosition is a flag
// (typically a uniform) set by the application when eye-space
// values are needed
if (NeedEyePosition)
{
    ecPosition  = gl_ModelViewMatrix * gl_Vertex;
    ecPosition3 = vec3(ecPosition) / ecPosition.w;
}

This snippet of code computes both the homogeneous point in eye space (a vec4) and the nonhomogeneous point (a vec3). Both values are useful, as we shall see.
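For example, here is a sketch of one common use: computing the eye-space direction from the surface point toward a positional light (assuming the light is stored in the built-in gl_LightSource[0] state):

// Direction from the eye-space surface point to light 0
vec3 lightDir = normalize(vec3(gl_LightSource[0].position) - ecPosition3);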

To perform lighting calculations in eye space, incoming surface normals must also be transformed. A built-in uniform variable is available to access the normal transformation matrix, as shown in Listing 9.2.

Listing 9.2. Transformation of normal

normal = gl_NormalMatrix * gl_Normal;

In many cases, the application may not know anything about the characteristics of the surface normals that are being provided. For the lighting computations to work correctly, each incoming normal must be normalized so that it is unit length. In OpenGL fixed functionality, normalization is a mode that is controlled by passing the symbolic constant GL_NORMALIZE to glEnable or glDisable. In an OpenGL shader, if normalization is required, we do it as shown in Listing 9.3.

Listing 9.3. Normalization of normal

normal = normalize(normal);
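In practice, the transformation and normalization steps are often folded into a single statement at the top of the vertex shader:

// Transform the incoming normal into eye space and make it unit length
vec3 normal = normalize(gl_NormalMatrix * gl_Normal);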

Sometimes an application always sends normals that are unit length and always uses a modelview matrix that performs only uniform scaling. In this case, rescaling can avoid the potentially expensive square root operation that normalization requires. If the rescaling factor is supplied by the application through the OpenGL API, the normal can be rescaled as shown in Listing 9.4.

Listing 9.4. Normal rescaling

normal = normal * gl_NormalScale;

The rescaling factor is stored as state within OpenGL and can be accessed from within a shader by the built-in uniform variable gl_NormalScale.
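To make the trade-off concrete, here is a sketch that selects between the two approaches at run time. RescaleNormal is a hypothetical application-supplied uniform, not built-in OpenGL state:

uniform bool RescaleNormal;  // hypothetical flag: true only when incoming
                             // normals are unit length and the modelview
                             // matrix does uniform scaling

void main()
{
    vec3 normal = gl_NormalMatrix * gl_Normal;
    if (RescaleNormal)
        normal = normal * gl_NormalScale; // cheap multiply, no square root
    else
        normal = normalize(normal);       // general case
    // ... lighting computations using normal ...
    gl_Position = ftransform();
}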

Texture coordinates can also be transformed. A texture matrix is defined for each texture coordinate set in OpenGL and can be accessed with the built-in uniform matrix array variable gl_TextureMatrix. Incoming texture coordinates can be transformed in the same manner as vertex positions, as shown in Listing 9.5.

Listing 9.5. Texture coordinate transformation

gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
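Putting the pieces of this section together, a minimal complete vertex shader might read as follows. This is a sketch, not a canonical implementation; the diffuse term exists only to show the eye-space values in use and assumes a positional light in gl_LightSource[0]:

void main()
{
    // Eye-space position (Listing 9.1)
    vec4 ecPosition  = gl_ModelViewMatrix * gl_Vertex;
    vec3 ecPosition3 = vec3(ecPosition) / ecPosition.w;

    // Eye-space unit normal (Listings 9.2 and 9.3)
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);

    // Placeholder diffuse term using light 0
    vec3 lightDir = normalize(vec3(gl_LightSource[0].position) - ecPosition3);
    gl_FrontColor = gl_Color * max(dot(normal, lightDir), 0.0);

    // Transformed texture coordinates (Listing 9.5)
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

    // Clip-space position, invariant with fixed functionality
    gl_Position = ftransform();
}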

