
Chapter 26. Classification and Regression Trees

Key Topics

Regression and classification trees, known collectively as decision trees (DTs), are data structures that capture patterns in data so that those patterns can be recognized in new inputs. They are organized as a hierarchy of decisions over particular variables, used to predict a result value. With classification trees, the result is a symbolic class, whereas regression trees return continuous values.
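In memory, such a tree can be represented as a small hierarchy of nodes. The following C++ sketch is illustrative only (the names DecisionNode, attribute, threshold, and so on are assumptions, not code from this book): each internal node tests one attribute to choose a branch, and a leaf stores either a class label or a continuous value.

// Minimal sketch of a binary decision tree node, assuming numeric
// attributes and a simple threshold test at each internal node.
struct DecisionNode
{
    int attribute;              // index of the predictor variable tested here
    float threshold;            // go left if value < threshold, else go right

    DecisionNode* left;         // child branches; both null for a leaf
    DecisionNode* right;

    int classLabel;             // result stored in leaves of a classification tree
    float value;                // result stored in leaves of a regression tree

    bool IsLeaf() const { return left == 0 && right == 0; }
};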

Decision trees must be learned from example data, so this data needs to be created or gathered beforehand. An expert can prepare it, or a collection of facts can be accumulated from the problem itself. Many tasks, including simulating intelligence, can be seen as interpreting, or classifying, such data. In practice, each problem can be encoded as a set of attributes, and the DT can predict an unknown attribute (the solution).
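As a rough illustration (the attribute names here are hypothetical, not taken from the book), one training sample for a game character could bundle a few predictor attributes together with the response to be predicted:

// One hypothetical training sample: predictor variables plus the
// response, that is, the attribute the tree will learn to predict.
struct Sample
{
    float distance;             // predictor: distance to the enemy
    float health;               // predictor: remaining health
    float hasAmmo;              // predictor: ammunition available (0 or 1)

    int action;                 // response: the action chosen by the expert
};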

This chapter discusses the following topics:

  • The concept of a data sample, with predictor and response variables

  • The representation of decision trees, with decision nodes and branches

  • The process of classifying and regressing based on an existing tree

  • How trees are learned—or induced—from data sets, using recursive partitioning

  • The training procedure from a wider perspective, aiming to find the best DT for the problem

Decision trees can serve as the decision-making component in game characters. Each situation is represented as a set of attributes, so the DT can suggest the best course of action for that situation. It's also possible to use regression trees to evaluate the benefit of an object, or to predict an outcome, whether positive or negative. The next chapter does this in the context of weapon selection.
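To give an idea of how a finished tree is used at run time, the sketch below (building on the hypothetical DecisionNode above) descends from the root, testing one attribute per node, until it reaches a leaf; the leaf's stored result is the prediction.

// Classify a situation, given as an array of attribute values,
// by walking down the tree from the root to a leaf.
int Classify(const DecisionNode* node, const float* attributes)
{
    while (!node->IsLeaf())
    {
        if (attributes[node->attribute] < node->threshold)
            node = node->left;
        else
            node = node->right;
    }
    return node->classLabel;    // a regression tree would return node->value instead
}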
