16.7. Particle Systems

A new type of rendering primitive was invented by Bill Reeves and his colleagues at Lucasfilm in the early 1980s as they struggled to come up with a way to animate the fire sequence called "The Genesis Demo" in the motion picture Star Trek II: The Wrath of Khan. Traditional rendering methods were better suited to rendering smooth, well-defined surfaces. What Reeves was after was a way to render a class of objects he called "fuzzy": things like fire, smoke, liquid spray, comet tails, fireworks, and other natural phenomena. These things are fuzzy because none of them has a well-defined boundary, and their components typically change over time.

The technique that Reeves invented to solve this problem was described in the 1983 paper, "Particle Systems: A Technique for Modeling a Class of Fuzzy Objects." Particle systems had been used in rendering before, but Reeves realized that he could get the particles to behave the way he wanted them to by giving each particle its own set of initial conditions and by establishing a set of probabilistic rules that governed how particles would change over time.

There are three main differences between particle systems and traditional surface-based rendering techniques. First, rather than being defined with polygons or curved surfaces, an object is represented by a cloud of primitive particles that defines its volume. Second, the object is considered dynamic rather than static: the constituent particles come into existence, evolve, and then die, and during their lifetime they can change position and form. Finally, an object defined in this manner is not completely specified. A set of initial conditions is specified, along with rules for birth, death, and evolution. Stochastic processes influence all three stages, so the shape and appearance of the object is nondeterministic. Some assumptions are usually made to simplify the rendering of particle systems, among them that particles do not collide with one another and do not cast shadows on other particles.
Particle attributes often include position, color, transparency, velocity, size, shape, and lifetime. To render a particle system, each particle's attributes are used along with certain global parameters to update its position and appearance at each frame. Each particle's position might be updated on the basis of its initial velocity vector and the effects of gravity, wind, friction, and other global factors. Each particle's color (including transparency), size, and shape can be modified as a function of global time, the age of the particle, its height, its speed, or any other parameter that can be calculated.

What are the benefits of using particle systems as a rendering technique? For one thing, complex systems can be created with little human effort. For another, the complexity can easily be adjusted. And as Reeves says in his 1983 paper, "The most important thing about particle systems is that they move: good dynamics are quite often the key to making things look real."

16.7.1. Application Setup

For this shader, my goal was to produce something that acts like a "confetti cannon": a device that spews out a large quantity of small, brightly colored pieces of paper. They don't come out all at once; they come out in a steady stream until none are left. Initial velocities are somewhat random, but there is a general direction that points up and away from the origin. Gravity influences these particles and eventually brings them back to earth.

The code in Listing 16.4 shows the C subroutine that I used to create the initial values for my particle system. To accomplish the look I was after, I decided that for each particle I needed its initial position, a randomly generated color, a randomly generated initial velocity (with some constraints), and a randomly generated start time. The subroutine createPoints lets you create an arbitrary-sized, two-dimensional grid of points for the particle system. There's no particular need for a two-dimensional grid, but I was interested in seeing the effect of particles "popping off the grid" like pieces of popcorn. It would be even easier to define the particle system as a 1D array, and all of the vertex positions could even have exactly the same initial value (for instance, (0,0,0)). But because I set it up as a 2D array, you pass in a width and a height that define the number of particles to be created.

After the memory for the arrays is allocated, a nested loop computes the values for each of the particle attributes at each grid location. Each vertex position has a y coordinate of 0, while the x and z coordinates vary across the grid. Each color component is assigned a random number in the range [0.5, 1.0] so that mostly bright pastel colors result. The velocity vectors are assigned random numbers to which I gave a strong upward bias by multiplying the y coordinate by 10. The general direction of the particles is aimed away from the origin by the addition of 3 to both the x and z coordinates. Finally, each particle is given a start time in the range [0, 10].

Listing 16.4. C subroutine to create vertex data for particles
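A minimal sketch of such a routine follows; the array and helper names (verts, colors, velocities, startTimes, randomFloat) are illustrative assumptions, but the attribute values match the description above.

    #include <stdlib.h>
    #include <GL/gl.h>

    static GLint   arrayWidth, arrayHeight;
    static GLfloat *verts      = NULL;   /* initial particle positions  */
    static GLfloat *colors     = NULL;   /* particle colors             */
    static GLfloat *velocities = NULL;   /* initial particle velocities */
    static GLfloat *startTimes = NULL;   /* particle birth times        */

    /* Returns a random float in the range [0.0, 1.0]. */
    static GLfloat randomFloat(void)
    {
        return (GLfloat) rand() / (GLfloat) RAND_MAX;
    }

    void createPoints(GLint w, GLint h)
    {
        GLfloat *vptr, *cptr, *velptr, *stptr;
        GLint   x, z;

        if (verts != NULL)
        {
            free(verts); free(colors); free(velocities); free(startTimes);
        }

        verts      = malloc(w * h * 3 * sizeof(GLfloat));
        colors     = malloc(w * h * 3 * sizeof(GLfloat));
        velocities = malloc(w * h * 3 * sizeof(GLfloat));
        startTimes = malloc(w * h     * sizeof(GLfloat));

        vptr   = verts;
        cptr   = colors;
        velptr = velocities;
        stptr  = startTimes;

        for (x = 0; x < w; x++)
            for (z = 0; z < h; z++)
            {
                /* Grid centered on the origin in x and z, height 0. */
                *vptr++ = (x + 0.5f) / w - 0.5f;
                *vptr++ = 0.0f;
                *vptr++ = (z + 0.5f) / h - 0.5f;

                /* Bright pastel colors: each component in [0.5, 1.0]. */
                *cptr++ = randomFloat() * 0.5f + 0.5f;
                *cptr++ = randomFloat() * 0.5f + 0.5f;
                *cptr++ = randomFloat() * 0.5f + 0.5f;

                /* Aim away from the origin in x and z, strongly upward in y. */
                *velptr++ = randomFloat() + 3.0f;
                *velptr++ = randomFloat() * 10.0f;
                *velptr++ = randomFloat() + 3.0f;

                /* Start time in the range [0, 10]. */
                *stptr++ = randomFloat() * 10.0f;
            }

        arrayWidth  = w;
        arrayHeight = h;
    }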
OpenGL has built-in attributes for vertex position, which we use to pass the initial particle position, and for color, which we use to pass the particle's color. We need generic vertex attributes to specify the particle's initial velocity and start time. Let's pick indices 3 and 4 and define the necessary constants:

    #define VELOCITY_ARRAY   3
    #define START_TIME_ARRAY 4

After we have created a program object, we can bind a generic vertex attribute index to a vertex shader attribute variable name. (We can do this even before the vertex shader is attached to the program object.) These bindings are checked and go into effect when glLinkProgram is called. To bind the generic vertex attribute indices to the vertex shader variable names, we do the following:

    glBindAttribLocation(ProgramObject, VELOCITY_ARRAY, "Velocity");
    glBindAttribLocation(ProgramObject, START_TIME_ARRAY, "StartTime");

After the shaders are compiled, attached to the program object, and linked, we're ready to draw the particle system. All we need to do is call the drawPoints function shown in Listing 16.5. In this function, we set the point size to 2 to render somewhat larger points. The next four lines of code set up pointers to the four vertex arrays we're using: one for vertex positions (i.e., initial particle positions), one for particle colors, one for initial velocities, and one for particle start times (i.e., birth times). After that, we enable the arrays for drawing by calling glEnableClientState for the standard vertex attributes and glEnableVertexAttribArray for the generic vertex attributes. Next we call glDrawArrays to render the points, and finally we clean up by disabling each of the enabled vertex arrays.

Listing 16.5. C subroutine to draw particles as points
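A sketch of drawPoints consistent with that description, reusing the arrays from the createPoints sketch above, might look like this:

    void drawPoints(void)
    {
        glPointSize(2.0f);   /* render somewhat larger points */

        /* Set up pointers to the four vertex arrays. */
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glColorPointer(3, GL_FLOAT, 0, colors);
        glVertexAttribPointer(VELOCITY_ARRAY, 3, GL_FLOAT,
                              GL_FALSE, 0, velocities);
        glVertexAttribPointer(START_TIME_ARRAY, 1, GL_FLOAT,
                              GL_FALSE, 0, startTimes);

        /* Enable the arrays for drawing. */
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glEnableVertexAttribArray(VELOCITY_ARRAY);
        glEnableVertexAttribArray(START_TIME_ARRAY);

        glDrawArrays(GL_POINTS, 0, arrayWidth * arrayHeight);

        /* Clean up by disabling each enabled array. */
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glDisableVertexAttribArray(VELOCITY_ARRAY);
        glDisableVertexAttribArray(START_TIME_ARRAY);
    }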
To achieve the animation effect, the application must communicate its notion of time to the vertex shader, as shown in Listing 16.6. Here, the variable ParticleTime is incremented once each frame and loaded into the uniform variable Time. This allows the vertex shader to perform computations that vary (animate) over time.

Listing 16.6. C code snippet to update the time variable each frame
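A sketch of that per-frame update follows; the increment value and the function name are assumptions, and in practice you would query the uniform location once after linking rather than every frame.

    static GLfloat ParticleTime = 0.0f;

    /* Called once per frame while the program object is in use. */
    void updateTime(void)
    {
        GLint timeLoc = glGetUniformLocation(ProgramObject, "Time");

        ParticleTime += 0.001f;   /* per-frame increment (assumed value) */
        glUniform1f(timeLoc, ParticleTime);
    }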
16.7.2. Confetti Cannon Vertex Shader

The vertex shader (see Listing 16.7) is the key to this example of particle system rendering. Instead of simply transforming the incoming vertex, we use it as the initial position in a computation, involving the uniform variable Time, that produces a new position. It is this newly computed position that is actually transformed and rendered.

This vertex shader defines the attribute variables Velocity and StartTime. In the previous section, we saw how generic vertex attribute arrays were defined and bound to these vertex shader attribute variables. As a result, each vertex has its own values for Velocity and StartTime, as well as for the standard vertex attributes gl_Vertex and gl_Color.

The vertex shader starts by computing the age of the particle. If this value is less than zero, the particle has not yet been born, and it is simply assigned the color provided through the uniform variable Background. (If you actually want to see the grid of yet-to-be-born particles, you can provide a color value other than the background color. And if you want to be a bit more clever, you can pass the value t to the fragment shader as a varying variable and let the fragment shader discard fragments for which t is less than zero. For our purposes, this wasn't necessary.) If a particle's start time is less than the current time, the following kinematic equation determines its current position:
    P = Pi + v·t + ½·a·t²

In this equation, Pi represents the initial position of the particle, v the initial velocity, t the elapsed time since the particle's birth, a the acceleration, and P the final computed position. For the acceleration, we use the value of acceleration due to gravity on Earth, which is 9.8 meters per second squared. In our simplistic model, we assume that gravity affects only the particle's height (y coordinate) and that the acceleration is negative (i.e., the particle slows down as it rises and then falls back to the ground). The coefficient of the t² term in the preceding equation therefore appears in our code as the constant 4.9, and it is applied only to vert.y. After this, all that remains is to transform the computed vertex and store the result in gl_Position.

Listing 16.7. Confetti cannon (particle system) vertex shader
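A sketch of the confetti cannon vertex shader, consistent with the description above (the uniform and attribute names match those bound by the application code):

    uniform float Time;        // animation time, updated by the application
    uniform vec4  Background;  // color for yet-to-be-born particles

    attribute vec3  Velocity;  // initial velocity, bound to index 3
    attribute float StartTime; // particle birth time, bound to index 4

    varying vec4 Color;

    void main()
    {
        vec4  vert;
        float t = Time - StartTime;   // age of this particle

        if (t >= 0.0)
        {
            // Kinematic equation P = Pi + v*t + 0.5*a*t*t, with the
            // 0.5 * 9.8 = 4.9 gravity term applied only to the height.
            vert    = gl_Vertex + vec4(Velocity * t, 0.0);
            vert.y -= 4.9 * t * t;
            Color   = gl_Color;
        }
        else
        {
            vert  = gl_Vertex;    // not yet born: stay at initial position
            Color = Background;   // and hide against the background
        }

        gl_Position = gl_ModelViewProjectionMatrix * vert;
    }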
The color value computed by the vertex shader is simply passed through the fragment shader to become the final color of the fragment to be rendered. Some frames from the confetti cannon animation sequence are shown in Figure 16.1.

Figure 16.1. Several frames from the animated sequence produced by the particle system shader. In this animation, the particle system contains 10,000 points with randomly assigned initial velocities and start times. The position of each particle at each frame is computed entirely in the vertex shader according to a formula that simulates the effects of gravity. (3Dlabs, Inc.)
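The pass-through fragment shader can be as simple as the following sketch; the varying variable name Color matches the vertex shader sketch above:

    varying vec4 Color;   // interpolated color computed by the vertex shader

    void main()
    {
        gl_FragColor = Color;
    }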
16.7.3. Further Enhancements

There's a lot that you can do to make this shader more interesting. You might pass the value t from the vertex shader to the fragment shader, as suggested earlier, and make the color of each particle change over time. For instance, you could shift the color from yellow to red to black to simulate an explosion, or reduce the alpha value over time to make the particle fade out. You might also provide a "time of death" and extinguish the particle completely at a certain time or at a certain distance from the origin. Instead of drawing the particles as points, you might draw them as short lines so that you can blur the motion of each particle. You could also vary the size of the point (or line) over time to create particles that grow or shrink. And you can make the physics model a lot more sophisticated than the one illustrated here. To make the particles look better, you could render them as point sprites, another new feature in OpenGL 2.0. (A point sprite is a point that is rendered as a textured quadrilateral that always faces the viewer.)

The real beauty of doing particle systems within a shader is that the computation is done entirely in graphics hardware rather than on the host CPU. If the particle system data is stored in a vertex buffer object, there's a good chance that it will reside in the on-board memory of the graphics hardware, so you won't even use up I/O bus bandwidth as you render the particle system each frame. With the OpenGL Shading Language, the equation for updating each particle can be arbitrarily complex. And because the particle system is rendered like any other 3D object, you can rotate it around and view it from any angle while it is animating. There's really no end to the effects (and the fun!) that you can have with particle systems.
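As one concrete example, here is a sketch of a modified vertex shader that fades each particle's alpha over an assumed ten-second lifetime; the lifetime constant and the fully transparent pre-birth color are assumptions, and the application must enable blending (e.g., glEnable(GL_BLEND) with a suitable blend function) for the fade to be visible.

    uniform float Time;
    attribute vec3  Velocity;
    attribute float StartTime;
    varying vec4 Color;

    void main()
    {
        vec4  vert = gl_Vertex;
        float t    = Time - StartTime;

        if (t >= 0.0)
        {
            vert    = gl_Vertex + vec4(Velocity * t, 0.0);
            vert.y -= 4.9 * t * t;
            Color   = gl_Color;
            Color.a = max(0.0, 1.0 - t / 10.0);  // fade out over 10 seconds
        }
        else
        {
            Color = vec4(0.0);  // fully transparent before birth
        }

        gl_Position = gl_ModelViewProjectionMatrix * vert;
    }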