Evaluation
Even with all the precautions, the example data collected is still very noisy. In most situations, the average error is as high as 25 percent (17 percent at best). Training takes the batch algorithm hundreds of epochs, depending on the random initial weights. Experiments show there is no miracle number of hidden units; low values, such as 4 or 5, seem to work as well as any others. Still, the perceptrons manage to learn some trends, notably by using the distances.
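As a rough sketch of this training setup, the following pure-Python batch backpropagation loop trains a one-hidden-layer perceptron, accumulating gradients over the whole batch and applying them once per epoch. The network size, learning rate, and the toy AND task at the end are illustrative assumptions, not the actual targeting features.

```python
import math
import random

def train_mlp(samples, hidden=4, epochs=2000, lr=0.5):
    """Batch-train a tiny one-hidden-layer perceptron on (features, target)
    pairs. Gradients are accumulated over the whole batch and applied once
    per epoch, which is why convergence can take hundreds of epochs and why
    the random initial weights matter."""
    random.seed(1)  # fixed seed so the illustration is reproducible
    n_in = len(samples[0][0])
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
          for _ in range(hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden + 1)]

    def sig(x):
        return 1.0 / (1.0 + math.exp(-x))

    for _ in range(epochs):
        g1 = [[0.0] * (n_in + 1) for _ in range(hidden)]
        g2 = [0.0] * (hidden + 1)
        for x, t in samples:
            xb = list(x) + [1.0]                          # inputs plus bias
            h = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1]
            hb = h + [1.0]
            y = sig(sum(w * v for w, v in zip(w2, hb)))
            dy = (y - t) * y * (1.0 - y)                  # output delta
            for j in range(hidden + 1):
                g2[j] += dy * hb[j]
            for j in range(hidden):                       # backpropagate
                dh = dy * w2[j] * h[j] * (1.0 - h[j])
                for i in range(n_in + 1):
                    g1[j][i] += dh * xb[i]
        w2 = [w - lr * g for w, g in zip(w2, g2)]         # batch update
        w1 = [[w - lr * g for w, g in zip(row, grow)]
              for row, grow in zip(w1, g1)]

    def predict(x):
        xb = list(x) + [1.0]
        hb = [sig(sum(w * v for w, v in zip(row, xb))) for row in w1] + [1.0]
        return sig(sum(w * v for w, v in zip(w2, hb)))
    return predict

# Toy usage: learn a simple separable function (AND) to show convergence.
predict = train_mlp([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
```

Because the updates are applied only once per pass over the data, a different random starting point can noticeably change how many epochs the batch algorithm needs.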
Despite the relatively low accuracy of the learning, the perceptron still proves a valuable help in optimizing target selection. The neural network visibly increases the effectiveness of the animat's shooting capabilities. The perceptron tends to favor spots that are close to the estimate, close to the enemy's current position, and close to the origin of the rocket. Points on the floor near the enemy are often chosen as targets (good compromises). The perceptron is also generally good at finding flaws in suggestions made by the target-generation mechanism, which prevents the animat from blowing itself up! Manually enforcing a minimal engagement distance prevents such kamikaze shots.
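That safeguard can be sketched in a few lines; the `MIN_SAFE_DISTANCE` value is a hypothetical blast-radius margin, and `score` stands in for the trained perceptron's evaluation of a candidate target:

```python
MIN_SAFE_DISTANCE = 8.0  # hypothetical blast-radius margin, in world units

def pick_target(candidates, shooter_pos, score):
    """Pick the best-scoring candidate target, rejecting any point so close
    to the shooter that firing at it would be a kamikaze shot."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

    safe = [c for c in candidates
            if dist(c, shooter_pos) >= MIN_SAFE_DISTANCE]
    if not safe:
        return None  # hold fire rather than blow ourselves up
    return max(safe, key=score)
```

Filtering before scoring means the constraint always holds, even on the occasions when the perceptron misjudges a suggestion.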
Note that a suitable default generation mechanism could be developed relatively easily without learning technology. For example, a suitable policy is to generate a random point near the estimate and drop it to the floor to find a target. However, the perceptron learns to find trends in a complex problem (which generally increases performance) and manages to optimize target selection according to individual skills.
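That default policy might look like the following sketch; the spread value and the flat `floor_y` are assumptions standing in for a real downward ray cast against the level geometry:

```python
import random

def generate_target(estimate, spread=4.0, floor_y=0.0):
    """Default (non-learning) target generation: jitter the estimated enemy
    position horizontally, then drop the point to the floor so the rocket's
    splash damage still applies. Coordinates are (x, y, z) with y up; the
    spread and the flat floor height are illustrative stand-ins for a ray
    cast to the actual geometry."""
    x = estimate[0] + random.uniform(-spread, spread)
    z = estimate[2] + random.uniform(-spread, spread)
    return (x, floor_y, z)
```

A perceptron layered on top then only has to rank these candidates, rather than invent targets from scratch.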
Combining this generation mechanism with a perceptron allows the animats to appear realistic while learning. This design means that the perceptron only improves the target selection according to experience; there's no risk of learning "stupid" behaviors.