
Chapter 17. Perceptrons

Key Topics

Perceptrons are one of the simplest kinds of artificial neural networks. They consist of parallel processing units, inspired by biological neurons, linked together by weighted connections. The term perceptron generally refers to networks with a single layer of processing units, so their computational complexity is relatively low; perceptrons can solve linearly separable problems and provide linear approximations to more complex ones.
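
To make this representation concrete, a single-layer perceptron with one output unit can be sketched as an array of connection weights plus a bias. The C++ sketch below, with its names and step activation, is an illustrative assumption rather than code from this book.

    #include <numeric>
    #include <vector>

    // Minimal sketch of a one-output perceptron: one weight per input, plus a bias.
    struct Perceptron {
        std::vector<float> weights;
        float bias = 0.0f;

        // Net stimulation: weighted sum of the inputs, offset by the bias.
        float net(const std::vector<float>& inputs) const {
            return std::inner_product(inputs.begin(), inputs.end(),
                                      weights.begin(), bias);
        }

        // Step activation: the unit fires (1) when the net input is positive.
        int output(const std::vector<float>& inputs) const {
            return net(inputs) > 0.0f ? 1 : 0;
        }
    };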

Applying a perceptron to a problem requires a simulation algorithm that filters the input data through the network to produce an output. In games, the output can be used to predict a situation (for example, win or loss), to classify patterns (for instance, identifying friends or enemies), or to control the body of animats (for instance, turning left or right).
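
As a hypothetical game use of that output, a unit like the one sketched above could classify a contact as friend or enemy from a few normalized features; the features and weight values here are placeholders, not taken from the text.

    #include <iostream>
    #include <vector>

    // Uses the Perceptron struct from the previous sketch.
    int main() {
        Perceptron p;
        p.weights = { 0.8f, -0.5f, 0.3f };   // e.g. proximity, same-team flag, threat level
        p.bias = -0.2f;

        std::vector<float> contact = { 0.6f, 0.0f, 0.9f };
        std::cout << (p.output(contact) ? "enemy" : "friend") << "\n";
    }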

This chapter covers the following topics:

  • The history of connectionism: the field of artificial models of brains

  • The representation used for perceptrons, including how the network is formed between inputs and outputs

  • The simulation algorithm used to process the information and obtain a result

  • An overview of optimization techniques (using mathematics to find a better representation) from a general perspective

  • How to adjust the weights of a perceptron as an optimization problem, so the neural network can be changed to produce the right output

  • Perceptron training algorithms that allow a system to learn from multiple examples of correct inputs/outputs (a minimal sketch of one such update rule appears after this list)

  • The role of a perceptron in graphical terms, to provide a more intuitive understanding
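
As a preview of the weight-adjustment topics above, one classical update is the perceptron learning rule, which nudges each weight in proportion to the error on a single example; the function name and learning rate below are assumptions for illustration, and the chapter develops the idea in more detail.

    #include <cstddef>
    #include <vector>

    // One training step of the classical perceptron rule on a single example.
    // Uses the Perceptron struct sketched earlier in the chapter.
    void train(Perceptron& p, const std::vector<float>& inputs,
               int target, float learningRate = 0.1f) {
        int error = target - p.output(inputs);    // -1, 0, or +1 with step outputs
        for (std::size_t i = 0; i < p.weights.size(); ++i)
            p.weights[i] += learningRate * error * inputs[i];
        p.bias += learningRate * error;
    }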

The next chapter applies perceptrons to multiple problems in a much more practical fashion. The next few pages cover the theory, using simple game examples when appropriate.
