
A Modern Approach

Traditionally, AI is viewed as a code fragment that manipulates data. These small programs are generally known as agents. Like most software systems, agents are built in layers: a central processor acquires information, processes it, deliberates a little more, and executes some actions. Acting on behalf of the user, agents solve narrow problems with a human-like quality.

This view is problematic for building large and intelligent systems; the theory scales up poorly, and does not transfer from lab examples to other domains. Nouvelle AI rejects such focused AI, instead believing that true intelligence is about performance in the real world.

The 1980s witnessed a revolution rooted in robotics that eventually shook most of AI. The ideas, initially put forward by Rodney Brooks (1986 and 1991), proposed a different model of intelligence, allowing working systems to be built with more suitable methodologies [Brooks86, Brooks91].

This leads to studying embodied systems situated in realistic environments (such as robots or game characters). To solve the problems that occur in practice, new approaches to AI are needed (such as the behavior-based approach).

Brooks argues that no central processor needs to deliberate over every move; instead, the system is distributed into behaviors that react instantly to their environment. Using this reactive approach, full systems are built up incrementally, with each set of components tested along the way.

This revolution has continued since, notably influencing a group of researchers to focus on the simulation of adaptive behavior (SAB). The first conference was organized back in 1990 by the International Society for Adaptive Behavior [ISAB02].

"Every two years, the Animals to Animats Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow natural and synthetic agents (animats) to adapt and survive in uncertain environments."

Animats are essentially synthetic creatures that live within a virtual environment. Because they are embodied, they interact with the world using only their body—making them fully autonomous. Animats can also adapt to their environment by using a variety of learning algorithms. But are these approaches suitable to games?

Animats in Games

Many game players, and even developers, would consider animats the "proper" way of dealing with AI NPCs. Wouldn't it be impressive to have each bot in the game as an accurately simulated creature? As far as game AI techniques are concerned, this nouvelle game AI approach is the opposite of standard techniques.

Are Animats Applicable to Games?

The major goal of game developers is believability; the accuracy of the simulation itself is not a concern. Still, animats have much to offer computer games. When creatures are simulated accurately, fewer aspects of their behaviors need to be "faked." Because the AI is genuine, it can handle situations unforeseen by the designers.

Already, similar (diluted) ideas are starting to leave their mark on the industry. Recent trends in game AI lean toward embodiment, notably in the simulation of sensory systems (Thief), the addition of noise to some actions (Quake 3), and even perceptual honesty (Black & White).

By extrapolating this progression, the result is fully embodied animats. This will certainly happen within a few years, but whether it is three or ten years away is anyone's guess. In the meantime, preliminary research in synthetic creatures shows that properties of animats, such as embodiment, actually lead to more genuine behaviors, which in turn improve believability [Isla02, Blumberg01].

As far as software engineering is concerned, the animat approach has much to offer from a design point of view. Embodiment is an elegant way of modeling the role of the AI in the game engine. Formally defining the interfaces between the body and the brain is good practice (notably separating the AI from the logic and the simulation). As for developing AI behaviors, animat and behavior-based research has revealed many ways of dealing with experimentation, such as building the AI system incrementally.

How Do We Create Animats Effectively?

How can such radical ideas be applied within game engines? Is it even feasible given time and computational constraints? As a matter of fact, it's more than feasible; different aspects of animats have already been demonstrated in popular games. This is the crucial observation; it's possible to integrate properties of animats into the standard AI design, which enables us to compromise between typical game AI approaches and the animat approach.

To date, no genuine animats have shipped in commercial games, but that day isn't too far off. Some animat prototypes have closely matched the skill level of standard game bots. In some cases, animat prototypes prove to be more reliable and realistic than game bots.

In fact, animats can be more efficient than standard agents in many respects. The interaction of an animat with its environment is formalized, so it can be optimized using the most appropriate format (for example, message passing, function calls, or shared variables). Learning techniques can minimize the processing power used to perform a particular behavior.

A Healthy Compromise

The animat approach has many benefits, regardless of policies on learning or embodiment. These advantages include improvements in the design and in the development pipeline. Naturally, genuine undiluted animats have the potential to be extremely successful within games, and the rest of this book investigates this noble goal. However, far from being on an idealistic crusade, this discussion attempts to identify places where the animat approach isn't appropriate in games, while trying to extract its advantages.

The remainder of this chapter investigates these issues further by tackling the two major characteristics of animats separately (embodiment and learning), looking into their potential benefits and pitfalls.

Embodiment

Embodiment is a different way of dealing with in-game creatures. Typically, NPCs are just agents: "smart" programs that manipulate data, like chatbots or web spiders. Such entities are purely virtual, whereas embodied agents live in a simulated world and have a synthetic body. Whether they are 2D sprites or complex 3D models, these bodies are subject to the physical rules of the world, so there are things they simply cannot do.

Definition

An embodied agent is a living creature subject to the constraints of its environment.

Because the bodies of animats are physically constrained, the actions of their brains are limited. In general, the possible actions that can be executed by the body—and hence the AI—are restricted to the subset of actions consistent with the laws of the simulation. These actions often turn out to be physically plausible. However, embodiment generally does not limit what the AI can achieve; it just restricts how it is done.

Some characters in games represent human players, who get to control the bodies. Many other characters are synthetic, their bodies controlled by the computer instead. The AI itself can be understood as the brain, and the body offers the means for interaction with the game's physics and logic.

Consider a classical example: a standard agent can change its position itself to reach any point in space. An animat, being embodied, needs to move relative to its current position and actually avoid obstacles along the way. It does not even have the capability to update its position directly. Nowadays, many games do this, effectively enforcing the simplest form of embodiment.
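As a minimal sketch of that difference, consider the following C++ fragment; the class and method names (Body, RequestMove, SensePosition, and so on) are hypothetical, chosen for illustration rather than taken from any particular engine. The standard agent simply writes its position, whereas the animat can only request a move that the body resolves under the rules of the simulation.

#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

// A standard agent can write the world state directly.
struct StandardAgent {
    Vec3 position;
    void ReachGoal(const Vec3& goal) { position = goal; }  // teleport; no physics involved
};

// The body enforces the rules of the simulation (here, only a maximum step length).
class Body {
public:
    Vec3 SensePosition() const { return position; }
    void RequestMove(const Vec3& dir) {
        float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        float scale = (len > maxStep) ? maxStep / len : 1.0f;  // clip to a plausible step
        position.x += dir.x * scale;
        position.y += dir.y * scale;
        position.z += dir.z * scale;
        // A real engine would also test for collisions with obstacles here.
    }
private:
    Vec3 position;
    float maxStep = 0.5f;
};

// The animat can only act through its body, one step at a time.
class Animat {
public:
    explicit Animat(Body& b) : body(b) {}
    void StepTowards(const Vec3& goal) {
        Vec3 here = body.SensePosition();
        body.RequestMove({ goal.x - here.x, goal.y - here.y, goal.z - here.z });
    }
private:
    Body& body;
};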

Actually simulating the body enables developers to add biologically plausible errors to the interaction with the environment. Errors might be present both when information is perceived from the environment and when actions are executed. For example, animats could have difficulty perceiving the type of a character in the distance. There could even be parametric noise in the turning action, so aiming is not perfect (as with humans). Including such biologically plausible details allows the NPC to behave more realistically.
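For instance, a sketch of parametric noise applied to a turning action might look like the following; the error model and the numerical parameters are illustrative assumptions rather than values from any specific game.

#include <cmath>
#include <random>

// Applies biologically plausible error to a requested turn angle (in radians).
// Larger turns produce proportionally larger errors, plus a small constant tremor.
float ApplyTurnNoise(float requestedAngle, std::mt19937& rng)
{
    const float relativeError = 0.05f;  // ~5% of the turn magnitude (assumed value)
    const float baseJitter = 0.01f;     // constant tremor in radians (assumed value)
    std::normal_distribution<float> noise(
        0.0f, baseJitter + relativeError * std::fabs(requestedAngle));
    return requestedAngle + noise(rng);
}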

Motivation

Increasingly, agents with full access to the game data are becoming inconvenient. Having no restrictions on the reading and writing of data often results in internal chaos within the design of the engine. Because there is no formalized interface, the queries for information are left to the client (AI). Developers are actually starting to impose restrictions on these queries, notably limiting the subset of information available to the AI, such as preventing bots from seeing through walls.

For large games (such as massively multiplayer online games), it's essential to develop such hooks for the AI in the game engine. Using formal interfaces is essential because doing so allows the server to be distributed so that agents can reside on different machines if necessary. The AI can thereby be fully separated from the game logic and from the simulation of the world (physics).

So it seems formal interfaces, such as those the AI Interface Standards Committee is attempting to define [AIISC03], will become increasingly important. Whether these can be standardized is another issue, but embodiment provides useful guidelines for drafting custom interfaces, modeled as the exchange of information between the body and the brain. Sensory data flows from the body to the brain, and actions are passed from the brain to the body.
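A minimal sketch of such an interface is shown below; the names and the contents of the Percept and Action structures are assumptions made for illustration, not the committee's proposal or any engine's actual API. Sensory data flows one way, actions flow the other, and the brain never touches the game data directly.

// Sensory data passed from the body to the brain.
struct Percept {
    float distanceToEnemy = 0.0f;  // example sensor reading
    bool enemyVisible = false;
};

// Actions passed from the brain back to the body.
struct Action {
    float turn = 0.0f;   // desired change of heading
    float move = 0.0f;   // desired forward speed
    bool fire = false;
};

// The body gathers data from the engine and applies actions under its constraints.
class IBody {
public:
    virtual ~IBody() = default;
    virtual Percept Sense() const = 0;
    virtual void Act(const Action& action) = 0;
};

// The brain never accesses the game data directly; it only sees the interface.
class IBrain {
public:
    virtual ~IBrain() = default;
    virtual Action Think(const Percept& percept) = 0;
};

// The game loop glues the two together once per update.
void UpdateAnimat(IBody& body, IBrain& brain)
{
    Action action = brain.Think(body.Sense());
    body.Act(action);
}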

This book anticipates the trend and uses such formal interfaces. In terms of code complexity, major improvements result from separating the acquisition of the data from its interpretation. As for efficiency, using embodiment often allows better optimizations.

Technology

With a formalized interface, the engineer can easily decide on the most appropriate format to communicate data to the AI—and do so mostly transparently using mechanisms such as messages, callbacks, abstract function calls, shared variables, and so on. Because a standard interface exists, its implementation can be particularly optimized for speed using the most appropriate mechanism.

Implementing embodiment efficiently requires a few common techniques to be used. These tricks are the major reasons why formal interfaces can actually outperform an AI implementation with direct access to the data:

  • Lazy evaluation means that no information is gathered from the world until it is actually requested by the AI. This prevents redundant computation.

  • Event-driven mechanisms mean that the AI does not need to check regularly for data. When relevant information is available, the AI is notified in an appropriate fashion.

  • Function inlining still allows the interfaces to be separated, but also optimized out by the compiler (if necessary). This is suitable for small functions, but larger ones benefit from being separate.

  • Custom optimizations can often be used to speed up the queries. For example, a spatial partition of the world allows a visibility query to check only the information that is actually needed.

  • Batching refers to collecting many queries or actions so that they can be processed later. Within the engine, the implementation can then decide the best way to deal with them to maintain memory coherence.

Used appropriately, these techniques can significantly reduce the cost of exchanging information between the AI and the engine, and make formal interfaces and embodiment a desirable property.
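As a hedged sketch of two of these ideas, lazy evaluation and event-driven sensing, the fragment below shows one way an engine might expose them; the VisionSensor and HearingSensor classes are assumptions made for illustration, not a standard interface.

#include <functional>
#include <optional>
#include <vector>

struct EnemyInfo { int id; float distance; };

// Lazy evaluation: the expensive visibility query only runs when the AI asks,
// and at most once per frame thanks to the cached result.
class VisionSensor {
public:
    const std::vector<EnemyInfo>& VisibleEnemies() {
        if (!cache) cache = QueryVisibility();  // deferred until actually requested
        return *cache;
    }
    void NewFrame() { cache.reset(); }          // invalidate the cache once per frame
private:
    std::vector<EnemyInfo> QueryVisibility() {
        // Placeholder for ray casts and spatial partition lookups in a real engine.
        return {};
    }
    std::optional<std::vector<EnemyInfo>> cache;
};

// Event-driven sensing: the engine pushes notifications instead of being polled.
class HearingSensor {
public:
    using Handler = std::function<void(const EnemyInfo&)>;
    void OnSoundHeard(Handler handler) { handlers.push_back(std::move(handler)); }
    // Called by the engine whenever a sound event occurs near this animat.
    void NotifySound(const EnemyInfo& source) {
        for (auto& handler : handlers) handler(source);
    }
private:
    std::vector<Handler> handlers;
};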

Learning

Learning is the second property of animats and a defining characteristic of nouvelle game AI. Instead of the designer crafting fixed behaviors, the process is automated using adaptation and optimization techniques.

Definition

Regardless of their actions in the world, living creatures are constantly presented with a flow of sensory data. Biological animals are capable of assimilating this information and using it to adapt their behavior. There is no reason why animats cannot learn as well; they too are presented with a stream of information from the environment, which they can interpret.

"Learning is the acquisition of new knowledge and abilities."

This definition identifies two kinds of learning: acquiring knowledge and acquiring behaviors. As far as the result is concerned, there is little difference between the two. Indeed, it's often possible to learn knowledge as a behavior; conversely, behaviors can be expressed as knowledge. So intrinsically, both these subtypes of learning can be considered identical in outcome.

In practice, a distinction exists between the two. One part of the animat does not change (phylogenetic), and another part can be adapted (ontogenetic). If the AI system itself is changed at runtime, the adaptation is called direct; otherwise, it is indirect [Manslow02]. (Again, there's a fine line between the two.)

Motivation

Two main scenarios encourage the use of learning in computer games: during development and within the game itself. A different term is used for each case, optimization and adaptation, respectively:

  • Optimization is about learning a solution to a known puzzle. This is essentially used to simplify the development process (offline) because learning might produce a better answer to the problem in less time than the manual approach.

  • Adaptation is about learning in unknown situations, and how best to deal with them. This scheme requires the AI to continuously update itself—to deal with different player styles during the game, for example (online).

Fundamentally, these scenarios may be considered the same problem, too! Indeed, the exact same techniques can be used to perform either. However, the two learning schemes are suited to different domains, which implies that different AI techniques are more appropriate for each.

The design of the AI can exploit these different types of learning, too. Optimization is often much easier to integrate into the development pipeline as a useful tool for creating believable characters. Adaptation, on the other hand, has repercussions within the game, so it requires a few more precautions in the design.

Technology

Many AI techniques can be used to perform both varieties of learning: neural networks, decision trees, genetic algorithms, reinforcement learning, classifier systems, and so forth. These different solutions are discussed throughout this book. From a conceptual point of view, there are the following four categories of algorithms:

  • Supervised learning algorithms need to be presented with examples. Apart from assimilating facts or behaviors, they can recognize patterns in the training samples. This allows the learning to generalize, and perform well on unseen examples.

  • Reinforcement learning evaluates the benefit of each action using a scalar number, instead of providing specific examples. This reward feedback is used to adapt the policy over time.

  • Evolutionary approaches provide scalar feedback for a sequence of actions, evaluating the fitness of entire episodes instead of giving a continuous reward.

  • Unsupervised learning does not rely on direct training. Instead, the designer provides high-level guidance, such as a performance metric.

Naturally, there are often ways to integrate these approaches—or even use one approach to solve the other (for example, self-supervision). These design issues come into consideration after the problem is identified.
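As a small sketch of the reinforcement category, the following tabular Q-learning style update works for a toy problem with a handful of discrete states and actions; this is just one of many possible techniques, and the table sizes and learning parameters are assumed for illustration.

#include <array>

// A tabular Q-learning style update for a toy problem with a handful of
// discrete states and actions; real game problems need richer representations.
constexpr int kStates = 16;
constexpr int kActions = 4;

struct QTable {
    std::array<std::array<float, kActions>, kStates> q{};  // value estimates, start at zero

    // Update the estimated value of taking `action` in `state`, after observing the
    // scalar `reward` and the resulting `nextState`.
    void Update(int state, int action, float reward, int nextState,
                float alpha = 0.1f, float gamma = 0.9f) {
        float best = q[nextState][0];
        for (int a = 1; a < kActions; ++a)
            best = (q[nextState][a] > best) ? q[nextState][a] : best;
        q[state][action] += alpha * (reward + gamma * best - q[state][action]);
    }
};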

Given techniques that learn (either supervised or not), the animats can be taught in different ways:

  • Teaching involves humans providing a set of examples that guide the animat's behavior until it has understood what to do.

  • Imitation allows the animat to copy another player, who is usually human. It can thereby learn its behavior from third-party experience.

  • Shaping sets up successive trials from which the animat can learn. After the animat learns to accomplish simple tasks, more complex ones are presented.

  • Trial and error places the animat in its environment and expects it to learn by trying out all the different approaches on its own.

Each of these methodologies can be followed during the development stage or during the actual game. Although these different approaches are presented in a practical fashion throughout this book, Chapter 35, "Designing Learning AI," specifically covers general technical and design issues.

For Skeptics

The key to successfully integrating learning within games is to use it with consideration. Some things are just not suited to learning. There will always be a need for "static" AI, even if it just acts as the glue between adaptive components.

The benefits of learning are undeniable! Learning enables the developer to save time whenever possible, and to add to the game's appeal by bringing ambitious designs to life.

However, it's debatable whether learning is capable of performing reliably within games. One of the major advantages of learning techniques is that they can be combined with other solutions. This enables the designer to modify or override the results of the learning. This book covers ways to control the learning indirectly while supervising the outcome directly.

Finally, because learning techniques can be applied to facts or behaviors, online or offline, and with so many different methodologies, there is almost certainly a flavor of learning suitable for every purpose.
