17.4. Antialiased Stripe Example

Aliasing does not occur until we attempt to represent a continuous image in screen space. This conversion occurs during rasterization; therefore, our attempts to mitigate its effects always occur in the fragment shader. The OpenGL Shading Language has several functions for this purpose that are available only to fragment shaders. To help explain the motivation for some of the language facilities for filter estimation, we develop a "worst case" scenario: alternating black and white stripes drawn on a sphere. Developing a fragment shader that performs antialiasing enables us to further illustrate the aliasing problem and the methods for reducing aliasing artifacts. Bert Freudenberg developed the first version of the GLSL shaders discussed in this section during the process of creating the antialiased hatching shader described in Section 18.1.

17.4.1. Generating Stripes

The antialiasing fragment shader determines whether each fragment is to be drawn as white or black to create lines on the surface of an object. The first step is to determine the method to be used for drawing lines. We use a single parameter as the basis for our stripe pattern. For illustration, let's assume that the parameter is the s coordinate of the object's texture coordinate. We have the vertex shader pass this value to us as a floating-point varying variable named V, eventually giving us a method for creating vertical stripes on a sphere. Figure 17.3 (A) shows the result of using the s texture coordinate directly as the intensity (grayscale) value on the surface of the sphere. The viewing position is slightly above the sphere, so we are looking down at the "north pole." The s texture coordinate starts off at 0 (black) and increases to 1 (white) as it goes around the sphere. The edge where black meets white can be seen at the pole, and it runs down the back side of the sphere. The front side of the sphere looks mostly gray, but its intensity increases gradually from left to right.

Figure 17.3. Using the s texture coordinate to create stripes on a sphere. In (A), the s texture coordinate is used directly as the intensity (gray) value. In (B), a modulus function creates a sawtooth function. In (C), the absolute value function turns the sawtooth function into a triangle function. (Courtesy of Bert Freudenberg, University of Magdeburg, 2002)
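
The vertex shader that supplies V is not shown in this section. A minimal sketch along the following lines (everything except the varying V is an assumption for illustration) simply copies the s texture coordinate into V and transforms the vertex:

varying float V;                  // s texture coordinate used for striping
varying float LightIntensity;     // the fragment shaders in this section also expect this

void main()
{
    V = gl_MultiTexCoord0.s;      // use s as the stripe parameter
    LightIntensity = 1.0;         // placeholder; a real shader would compute lighting
    gl_Position = ftransform();
}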


We create a sawtooth wave by multiplying the s texture coordinate by 16 and taking the fractional part (see Figure 17.3 (B)). This causes the intensity value to start at 0, rise steadily to 1, and then drop abruptly back to 0. (To get a feel for what a sawtooth wave looks like, see the illustrations for the built-in functions fract (refer to Figure 5.6) and mod (refer to Figure 5.7).) This sequence is repeated 16 times around the sphere. The OpenGL Shading Language code to implement this is

float sawtooth = fract(V * 16.0);

This isn't quite the stripe pattern we're after. To get closer, we employ the absolute value function (see Figure 17.3 (C)). By multiplying the value of sawtooth by 2 and subtracting 1, we get a function that varies over the range [-1, 1]. Taking the absolute value of this function results in a function that goes from 1 down to 0 and then back to 1 (i.e., a triangle wave). The line of code to do this is

float triangle = abs(2.0 * sawtooth - 1.0);

A stripe pattern is starting to appear, but either it's too blurry or our glasses need adjustment. We make the stripes pure black and white by using the step function. When we compare our triangle variable to 0.5, this function returns 0 whenever triangle is less than 0.5 and 1 whenever triangle is greater than or equal to 0.5. This could be written as

float square = step(0.5, triangle);

This effectively produces a square wave, and the result is illustrated in Figure 17.4 (A). We can modify the relative size of the alternating stripes by adjusting the threshold value provided in the step function.

Figure 17.4. Antialiasing the stripe pattern. We can see that the square wave produced by the step function produces aliasing artifacts (A). The smoothstep function with a fixed-width filter produces too much blurring near the equator but not enough at the pole (B). An adaptive approach provides reasonable antialiasing in both regions (C). (Courtesy of Bert Freudenberg, University of Magdeburg, 2002)


17.4.2. Analytic Prefiltering

In Figure 17.4 (A), we see that the stripes are now distinct, but aliasing has reared its ugly head. The step function returns values that are either 0 or 1, with nothing in between, so the jagged edges in the transitions between white and black are easy to spot. They will not go away if we increase the resolution of the image; they'll just be smaller. The problem is that the step function introduces an immediate transition from white to black, in other words, an edge with infinite frequency (see Figure 5.11). There is no way to sample this transition at a high enough frequency to eliminate the aliasing artifacts. To get good results, we need to take steps within our shader to remove such high frequencies.

A variety of antialiasing techniques rely on eliminating extremely high frequencies before sampling. This is called LOW-PASS FILTERING because low frequencies are passed through unmodified, whereas high frequencies are eliminated. The visual effect of low-pass filtering is that the resulting image is blurred.

To eliminate the high frequencies from the stripe pattern, we use the smoothstep function. We know that this function produces a smooth transition between white and black. It requires that we specify two edges, and a smooth transition occurs between those two edges. Figure 17.4 (B) illustrates the result from the following line of code:

float square = smoothstep(0.4, 0.6, triangle);

17.4.3. Adaptive Analytic Prefiltering

Analytic prefiltering produces acceptable results in some regions of the sphere but not in others. The size of the smoothing filter (0.2) is defined in parameter space, but the parameter does not vary at a constant rate in screen space. In this case, the s texture coordinate varies quite rapidly in screen space near the poles and much less rapidly at the equator. Our fixed-width filter produces blurring across several pixels at the equator and very little effect at the poles. What we need is a way to determine the size of the smoothing filter adaptively so that the transition is appropriate at all scales in screen space. This requires a measurement of how rapidly the function we're interested in is changing at a particular position in screen space.

Fortunately, the OpenGL Shading Language provides a built-in function that can give us the rate of change (derivative) of any parameter in screen space. The function dFdx gives the rate of change in screen coordinates in the x direction, and dFdy gives the rate of change in the y direction. Because these functions deal with screen space, they are available only in a fragment shader. These two functions can provide the information needed to compute a GRADIENT VECTOR for the position of interest.

Given a function f(x,y), the gradient of f at the position (x, y) is defined as the vector

G[f(x,y)] = (∂f/∂x, ∂f/∂y)


In English, the gradient vector comprises the partial derivative of function f with respect to x (i.e., the measure of how rapidly f is changing in the x direction) and the partial derivative of the function f with respect to y (i.e., the measure of how rapidly f is changing in the y direction). The important properties of the gradient vector are that it points in the direction of the maximum rate of increase of the function f(x,y) (the gradient direction) and that the magnitude of this vector equals the maximum rate of increase of f(x,y) in the gradient direction. (These properties are useful for image processing too, as we see later.) The built-in functions dFdx and dFdy give us exactly what we need to define the gradient vector for functions used in fragment shaders.

The magnitude of the gradient vector for the function f(x,y) is commonly called the GRADIENT of the function f(x,y). It is defined as

mag[G[f(x,y)]] = sqrt((∂f/∂x)^2 + (∂f/∂y)^2)

In practice, it is not always necessary to perform the (possibly costly) square root operation. The gradient can be approximated with absolute values:

mag[G[f(x,y)]] ≈ abs(f(x,y) - f(x + 1, y)) + abs(f(x,y) - f(x, y + 1))

This is exactly what is returned by the built-in function fwidth. The sum of the absolute values is an upper bound on the width of the sampling filter needed to eliminate aliasing. If it is too large, the resulting image looks somewhat more blurry than it should, but this is usually acceptable.

The two methods of computing the gradient are compared in Figure 17.5. As you can see, there is little visible difference. Because the value of the gradient was quite small for the function being evaluated on this object, the values were scaled so that they would be visible.

Figure 17.5. Visualizing the gradient. In (A), the magnitude of the gradient vector is used as the intensity (gray) value. In (B), the gradient is approximated with absolute values. (Actual gradient values are scaled for visualization.) (Courtesy of Bert Freudenberg, University of Magdeburg, 2002)
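
A fragment shader that produces an image like the one in Figure 17.5 (A) might look like the following sketch; the scale factor of 32.0 is an assumption chosen only to make the small gradient values visible, not a value taken from the original shader:

varying float V;

void main()
{
    // Magnitude of the screen-space gradient of V
    float grad = length(vec2(dFdx(V), dFdy(V)));

    // Scale the (small) gradient so it shows up as a gray value
    gl_FragColor = vec4(vec3(grad * 32.0), 1.0);
}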


To compute the actual gradient for a varying variable V within a fragment shader, we use

float width = length(vec2(dFdx(V), dFdy(V)));

To approximate it, we use the potentially higher-performance calculation:

float width = fwidth(V);

We then use the filter width within our call to smoothstep as follows:

float edge   = width * 32.0;
float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);

If we put this all together in a fragment shader, we get Listing 17.1.

Listing 17.1. Fragment shader for adaptive analytic antialiasing

varying float V;                    // generic varying
varying float LightIntensity;

uniform float Frequency;            // Stripe frequency = 6

void main()
{
    float sawtooth = fract(V * Frequency);
    float triangle = abs(2.0 * sawtooth - 1.0);
    float dp = length(vec2(dFdx(V), dFdy(V)));
    float edge = dp * Frequency * 2.0;
    float square = smoothstep(0.5 - edge, 0.5 + edge, triangle);
    gl_FragColor = vec4(vec3(square), 1.0);
}

If we scale the frequency of our texture, we must also increase the filter width accordingly. After the value of the function is computed, it is replicated across the red, green, and blue components of a vec3 and used as the color of the fragment. The results of this adaptive antialiasing approach are shown in Figure 17.4 (C). The results are much more consistent across the surface of the sphere. A simple lighting computation is added, and the resulting shader is applied to the teapot in Figure 17.6.

Figure 17.6. Effect of adaptive analytical antialiasing on striped teapots. On the left, the teapot is drawn with no antialiasing. On the right, the adaptive antialiasing shader is used. A small portion of the striped surface is magnified 200% to make it easier to see the difference.


This approach to antialiasing works well until the filter width gets larger than the frequency. This is the situation that occurs at the north pole of the sphere. The stripes very close to the pole are much thinner than one pixel, so no step function will produce the correct gray value here. In such regions, you need to switch to integration or frequency clamping, both of which are discussed in subsequent sections.

17.4.4. Analytic Integration

The weighted average of a function over a specified interval is called a CONVOLUTION. The values that do the weighting are called the CONVOLUTION KERNEL or the CONVOLUTION FILTER. In some cases, we can reduce or eliminate aliasing by determining the convolution of a function ahead of time and then sampling the convolved function rather than the original function. The convolution can be performed over a fixed interval in a computation that is equivalent to convolving the input function with a box filter. A box filter is far from ideal, but it is simple and easy to compute and often good enough.
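
Concretely, if F is the integral (antiderivative) of the function f, convolving f with a box filter of width w starting at x reduces to evaluating the integral at the two edges of the filter, subtracting, and dividing by the width:

boxfiltered(x) = (F(x + w) - F(x)) / w

This is exactly the form we use for the brick pattern later in this section.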

This method corresponds to the notion of antialiasing by AREA SAMPLING. It is different from point sampling or supersampling in that we attempt to calculate the area of the object being rendered relative to the sampling region. Referring to Figure 17.2, if we used an area sampling technique, we would get more accurate values for each of the pixels, and we wouldn't miss that pixel that just had a sliver of coverage.

In Advanced RenderMan: Creating CGI for Motion Pictures, Apodaca and Gritz (1999) explain how to perform analytic antialiasing of a periodic step function, sometimes called a PULSE TRAIN. Darwyn Peachey described how to apply this method to his procedural brick RenderMan shader in Texturing and Modeling: A Procedural Approach, and Dave Baldwin published a GLSL version of this shader in the original paper on the OpenGL Shading Language. We use this technique to analytically antialias the procedural brick GLSL shader we described back in Chapter 6. Recall that the simple brick example used the step function to produce the periodic brick pattern. The function that creates the brick pattern in the horizontal direction is illustrated in Figure 17.7. From 0 to BrickPct.x (the brick width fraction), the function is 1.0. At the value of BrickPct.x, there is an edge with infinite slope as the function drops to 0. At the value 1, the function jumps back up to 1.0, and the process is repeated for the next brick.

Figure 17.7. The periodic step function, or pulse train, that defines the horizontal component of the procedural brick texture
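
In shader form, the point-sampled (unantialiased) version of this pulse is produced with the step function, as in the simple brick shader of Chapter 6. A sketch, using the same variable names as Listing 17.2, might be

vec2 useBrick = step(fract(position), BrickPct);  // 1.0 inside the brick, 0.0 in the mortar

It is this abrupt transition from 1 to 0 that we now set out to filter.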


The key to antialiasing this function is to compute its integral, or accumulated, value. We have to consider the possibility that, in areas of high complexity, the filter width that is computed by fwidth will cover several of these pulses. By sampling the integral rather than the function itself, we get a properly weighted average and avoid the high frequencies caused by point sampling that would produce aliasing artifacts.

So what is the integral of this function? It is illustrated in Figure 17.8. From 0 to BrickPct.x, the function value is 1, so the integral increases with a slope of 1. From BrickPct.x to 1.0, the function has a value of 0, so the integral stays constant in this region. At 1, the function jumps back to 1.0, so the integral again increases with a slope of 1 until x reaches BrickPct.x + 1. At this point, the integral changes to a slope of 0 again, and this pattern of ramps and plateaus continues.

Figure 17.8. Periodic step function (pulse train) and its integral


We perform antialiasing by determining the value of the integral over the area of the filter, and we do that by evaluating the integral at the edges of the filter and subtracting the two values. The integral for this function consists of two parts: the sum of the area for all the pulses that have been fully completed before the edge we are considering and the area of the possibly partially completed pulse for the edge we are considering.

For our procedural brick shader, we use the variable position.x as the basis for generating the pulse function in the horizontal direction. So the number of fully completed pulses is just floor(position.x). Because the height of each pulse is 1.0, the area of each fully completed pulse is just BrickPct.x. Multiplying floor(position.x) by BrickPct.x gives the area for all the fully completed pulses. The edge that we're considering may be in the part of the function that is equal to 0, or it may be in the part of the function that is equal to 1. We can find out by computing fract(position.x) - (1.0 - BrickPct.x). If the result of this subtraction is less than 0, we were in the part of the function that returns 0, so nothing more needs to be done. But if the value is greater than zero, we are partway into a region of the function that is equal to 1. Because the height of the pulse is 1, the area of this partial pulse is fract(position.x) - (1.0 - BrickPct.x). Therefore, the second part of our integral is the expression max(fract(position.x) - (1.0 - BrickPct.x), 0.0).
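
As a quick numeric check of these two parts (the values here are illustrative assumptions, not taken from the shader), suppose BrickPct.x is 0.9 and position.x is 2.5. Then

floor(2.5) * 0.9 = 1.8                                       (fully completed pulses)
max(fract(2.5) - (1.0 - 0.9), 0.0) = max(0.4, 0.0) = 0.4     (partially completed pulse)

so the value of the integral at position.x = 2.5 is 1.8 + 0.4 = 2.2.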

We use this integral for both the horizontal and vertical components of our procedural brick pattern. Because the application knows the brick width and height fractions (BrickPct.x and BrickPct.y), it can easily compute 1.0 - BrickPct.x and 1.0 - BrickPct.y and provide them to our fragment shader as well. This keeps us from unnecessarily computing these values several times for every fragment that is rendered. We call these values the mortar percentage. Because we evaluate this expression twice with different arguments, we define it as a macro or a function for convenience:

#define Integral(x, p, notp) ((floor(x)*(p)) + max(fract(x)-(notp), 0.0))

The parameter p indicates the value that is part of the pulse (i.e., when the function is 1.0), and notp indicates the value that is not part of the pulse (i.e., when the function is 0). Using this macro, we can write the code to compute the value of the integral over the width of the filter as follows:

vec2 fw, useBrick;

fw = fwidth(position);

useBrick = (Integral(position + fw, BrickPct, MortarPct) -
            Integral(position, BrickPct, MortarPct)) / fw;

The result is divided by the width of the filter (a box filter is assumed in this case) to obtain the average value of the function over the selected interval.

17.4.5. Antialiased Brick Fragment Shader

Now we can put all this to work to build better bricks. We replace the simple point sampling technique used in the example in Chapter 6 with analytic integration. The resulting shader is shown in Listing 17.2. The difference between the aliased and antialiased brick shaders is shown in Color Plate 35.

Listing 17.2. Source code for an antialiased brick fragment shader

uniform vec3  BrickColor, MortarColor;
uniform vec2  BrickSize;
uniform vec2  BrickPct;
uniform vec2  MortarPct;

varying vec2  MCposition;
varying float LightIntensity;

#define Integral(x, p, notp) ((floor(x)*(p)) + max(fract(x)-(notp), 0.0))

void main()
{
    vec2 position, fw, useBrick;
    vec3 color;

    // Determine position within the brick pattern
    position = MCposition / BrickSize;

    // Adjust every other row by an offset of half a brick
    if (fract(position.y * 0.5) > 0.5)
        position.x += 0.5;

    // Calculate filter size
    fw = fwidth(position);

    // Perform filtering by integrating the 2D pulse made by the
    // brick pattern over the filter width and height
    useBrick = (Integral(position + fw, BrickPct, MortarPct) -
                Integral(position, BrickPct, MortarPct)) / fw;

    // Determine final color
    color  = mix(MortarColor, BrickColor, useBrick.x * useBrick.y);
    color *= LightIntensity;
    gl_FragColor = vec4(color, 1.0);
}

