Momentum and Friction

When human players aim with the mouse, momentum generally carries the view further than they expect on large turns. Conversely, friction slows the turn down on small adjustments. Human players can therefore lose accuracy whether small or large view adjustments are needed. The trade-off is between finding the target quickly but overshooting, or taking more time to reach the target with higher precision.

For the AI, it's often a matter of specifying the exact target and firing the weapon. Because bots with perfect accuracy aren't as fun to play against, physically plausible turning errors could be added to the animats (see Figure 18.1). This increases the difficulty of the task and puts the animats on par with human players.

Figure 18.1. Two examples of errors that commonly affect the aiming process.

[Image: graphics/18fig01.gif]

Explicit Model

A mathematical function is used to model these kinds of errors, expressing the actual angle in terms of the desired angle and the previous angle. This feedback loop represents momentum carried over from frame to frame, and scaling the desired angle adjustment represents the friction.

The function itself is defined over the desired angle and the previous output. We also include a parametric error in terms of the desired angle; the noise() function returns a value between –1 and 1:

Equation 18-1

    angle(t) = a * angle(t-1) + (1 - a) * b * desired(t)
    where b = 1 + 0.1 * noise(desired(t))


  • a (alpha in Greek) is a scaling parameter used to blend the previous output with the angle request. This factor can take into account different properties that affect the accuracy: movement, angular velocity, fatigue, and so on. The designer can set the parameter to be within the range [0.3,0.5].

  • b (beta) is effectively initialized randomly within [0.9,1.1]. It can be seen as a parametric error factor for the angle request. It's multiplied by the angle because we don't want small errors when no movement is requested. (That is, the error is proportional to the angle.)

  • The noise() function can be defined arbitrarily, as long as it returns a value within [–1,1]. For example, the function cos(angle^2 * 0.217 + 342 / angle) oscillates almost unpredictably within that range.
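As a concrete illustration, the model can be sketched in a few lines of Python. This is a minimal sketch, not the book's implementation: the names noise and actual_angle are hypothetical, and the exact way b is derived from noise() is an assumption; only the ranges of a and b come from the text above.

```python
import math

def noise(angle):
    """The arbitrary oscillating function suggested above; returns a value in [-1, 1]."""
    if angle == 0.0:
        return 0.0  # no movement requested means no error (and avoids 342 / 0)
    return math.cos(angle * angle * 0.217 + 342.0 / angle)

def actual_angle(desired, previous, a=0.4):
    """One frame of the momentum-and-friction model.

    a in [0.3, 0.5] blends the previous output with the request (momentum);
    b, effectively random within [0.9, 1.1], scales the request (friction
    plus a parametric error proportional to the angle).
    """
    b = 1.0 + 0.1 * noise(desired)  # assumed mapping of noise() onto [0.9, 1.1]
    return a * previous + (1.0 - a) * b * desired
```

Feeding each frame's output back in as previous reproduces the under- and overshooting described in the text.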

This makes the aiming more realistic, because the AI will also be subject to under- and overshooting and aiming errors.

Linear Approximation

A perceptron is used to approximate the function illustrated in Equation 18.1. This can be understood as the animat learning to turn in a smoother fashion. Alternatively, it can be seen as a way of learning a faster approximation of an expensive function in the interface's implementation (inaccessible from the animat). In fact, after the function is learned, it could be moved to the interface—hidden from the AI code. The aiming errors would then become a constraint.

The equation is well-suited to being approximated linearly. In fact, it becomes a moving average; the previous angle requests and the current request are weighted together. The major task of the learning will be to understand the a parameter, approximate b, and adjust the weights accordingly (see Figure 18.2).

Figure 18.2. Flow chart representation of the processing of the angles before they are passed to the engine.

[Image: graphics/18fig02.gif]

To approximate the equation, the previous output needs to be made explicit. We'll have to plug it back into the input ourselves, so that the perceptron sees a direct, reactive mapping.
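This feedback wiring can be done outside the network, so that each frame the perceptron (or any stand-in function) receives the previous output as an ordinary input. smooth_turn and the predict callback are hypothetical names for illustration:

```python
def smooth_turn(predict, desired, state):
    """Feed the previous output back in explicitly, so that predict() sees
    a direct, reactive mapping (desired, previous) -> actual.

    predict -- any callable, e.g. the trained perceptron's forward pass
    state   -- per-animat dict holding the previous output between frames
    """
    output = predict(desired, state["previous"])
    state["previous"] = output  # remembered for the next frame
    return output
```

A separate state dict would be kept for each angle (yaw and pitch), since each carries its own momentum.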

Methodology

We'll compute the approximation by training the network iteratively. Random inputs are chosen, and the desired output is computed with Equation 18.1. By gathering these results together, a batch training algorithm can be applied to find suitable values for each of the weights and biases. This can be done during initialization or, better still, offline.

The perceptron itself will be modular: Rather than one large perceptron computing all the angles (that is, yaw and pitch) together, a smaller perceptron is applied twice, once per angle. This reduces the memory used to store the perceptron, but requires slightly more programming.
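The whole procedure might look as follows. This is a sketch under stated assumptions: train_linear is a hypothetical name, the target function is a noiseless linear stand-in for Equation 18.1 (a blend with a = 0.4), and plain batch gradient descent stands in for whatever batch algorithm is actually used.

```python
import random

def train_linear(samples, epochs=1000, lr=0.1):
    """Batch-train a single linear unit: output = w[0]*desired + w[1]*previous + bias.

    samples -- list of ((desired, previous), target) pairs generated offline
    """
    w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]
    bias = 0.0
    n = float(len(samples))
    for _ in range(epochs):
        gw0 = gw1 = gb = 0.0
        for (d, p), t in samples:
            err = (w[0] * d + w[1] * p + bias) - t
            gw0 += err * d
            gw1 += err * p
            gb += err
        w[0] -= lr * gw0 / n  # batch update: average gradient over all samples
        w[1] -= lr * gw1 / n
        bias -= lr * gb / n
    return w, bias

# Generate training pairs from the stand-in target, then fit. The same small
# network, once trained, is reused for yaw and for pitch separately.
a = 0.4
pairs = [(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)) for _ in range(50)]
samples = [((d, p), a * p + (1.0 - a) * d) for d, p in pairs]
weights, bias = train_linear(samples)
```

After training, the weights recover the blend: roughly (1 - a) on the desired angle and a on the previous output, with a bias near zero.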

Evaluation

This is not an especially difficult problem. Although the function is not linear by nature, the linear approximation comes close. The behaviors generated by the perceptron are more realistic, because the turning is much smoother (visibly subject to friction and momentum).

When the perceptron learns a function with a lot of momentum, there's a certain feeling of motion sickness. That said, beginners often have this problem too, and their aim can be just as unstable.

As for the training, it is done very quickly. Only a few samples (for instance, 10) are required for the weights to be adjusted suitably. The number of iterations required varies greatly depending on the initial random weights.

Practical Demo

The example animat demonstrating this problem is known as Missy, which can be found online at http://AiGameDev.com/ in source and binary formats among the other demos. See the attached guide to get the animat running in the game. A perceptron is used to approximate the aiming error function, learned using a batch algorithm during the initialization.

