Black Box Understanding

Considering the problem from an external point of view, it can be understood as a black box in two ways: as a behavior trying to accomplish a task, or as a software component processing information.

Informal Knowledge

Informally, the problem can be understood in terms of the task it must accomplish. We can consider the task as a behavior interacting with the environment (shooting), requiring information (enemy position and terrain), and producing an outcome (a targeted projectile). See Figure 21.1.

Figure 21.1. Graphical representation of the task; the information is used to produce an outcome. The problem involves mapping inputs to outputs.
Such knowledge of the task is refined in the understanding phase of AI development (see Chapter 7, "Analysis and Understanding"). You can further this informal black box knowledge by establishing the correspondence between situations and outcomes, generally with a case study. In addition, the criteria used to evaluate the behaviors provide an informal indication of the task's complexity. (The description of wall following is longer than that of obstacle avoidance, for instance.)

Software Specification

Informal knowledge of the task (high level) is often combined with a more formal approach to specify the problem as a software component (low level). The specification phase aims to describe the interfaces with other components (and the platform) by formalizing the data representation (see Chapter 9, "Specifications and Knowledge Representation"). The initial understanding of the problem as a software component is often refined during the application phase with observations made from prototypes. For example, providing the target selection with knowledge of the terrain as well as the enemy is such a refinement. A stable formal interface enables developers to analyze the problem theoretically.

Theoretical Analysis

The benefit of the specification is that we can reason about the inputs and outputs (the variables of the problem). Indeed, from the representation of the variables, we can deduce the theoretical complexity of the problem, which affects the capabilities of the AI and our ability to design a solution (see Figure 21.2).

Figure 21.2. Example problem with six variables, each with two to five values. Both inputs and outputs are treated identically.
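The counting behind a problem like the one in Figure 21.2 can be sketched in a few lines. The concrete sizes below are assumptions for illustration, since the figure's exact per-variable values are not given; only the structure (four inputs, two outputs, each with two to five values) is assumed here.

```python
from math import prod

# Hypothetical sizes for a six-variable problem in the style of
# Figure 21.2: four input parameters and two output parameters,
# each taking between two and five possible values (sizes assumed).
input_sizes = [2, 5, 3, 4]   # e.g. weapon, enemy type, distance band, cover
output_sizes = [3, 2]        # e.g. action, stance

domain_size = prod(input_sizes)      # number of input configurations
codomain_size = prod(output_sizes)   # number of output configurations
problem_size = domain_size * codomain_size

print(domain_size, codomain_size, problem_size)  # 120 6 720
```

Treating inputs and outputs identically, as the figure does, the size of the whole problem is simply the product of the two.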
The size of the problem essentially depends on the number of input and output configurations. As a reminder, each possible combination of parameter values forms one configuration. The set of all input configurations is known as the domain, and the set of all output configurations is the codomain. The magnitude of these domains can be measured by two factors:

- The number of dimensions (that is, how many parameters the problem has)
- The size of each dimension (that is, how many values each parameter can take)
The total number of configurations for the problem is calculated by multiplying the size of each dimension (see Figure 21.3). As more parameters are introduced to the problem, the total number of configurations grows exponentially, because we take the product of these values. This is known as the curse of dimensionality. Figure 21.3. The set of configurations representing the inputs (left), the problem (middle), and the output (right). The size of the problem is the product of the input and output sizes.
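The exponential growth can be made concrete: each new parameter multiplies the running total by its number of values. A minimal sketch (the parameter sizes here are illustrative, not from the text):

```python
from math import prod

sizes = []
for new_size in [10, 10, 10, 10]:  # add four parameters, ten values each
    sizes.append(new_size)
    print(len(sizes), "parameters ->", prod(sizes), "configurations")
# 1 parameters -> 10 configurations
# 2 parameters -> 100 configurations
# 3 parameters -> 1000 configurations
# 4 parameters -> 10000 configurations
```

Because every added dimension contributes a multiplicative factor rather than an additive one, the total grows exponentially in the number of parameters: the curse of dimensionality.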
Table 21.1 shows an example domain size for a simple problem: deciding whether to attack based on the enemy's identity, its distance, and the player's health. This simple problem has a total of 1,656,400 configurations. Adding another parameter, such as the enemy's health, would increase this size to 167,296,400!
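The jump from 1,656,400 to 167,296,400 is a factor of exactly 101, which is what a health parameter taking the values 0 through 100 would contribute. The 101-value range is an assumption, but it is consistent with the quoted figures:

```python
base = 1_656_400      # configurations before adding the extra parameter
health_values = 101   # health 0..100 inclusive (assumed range)

total = base * health_values
print(total)  # 167296400, matching the figure in the text
```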
The theoretical analysis reveals the worst-case scenario. In practice, however, problems are often simpler. |