History of Perceptrons

Although much of the artificial intelligence (AI) community focused on symbolic reasoning, a few researchers around the mid-twentieth century were investigating parallel distributed computing, notably models inspired by the nervous activity of biological brains. In 1943, McCulloch and Pitts began experimenting with simple simulations of the nervous system [McCulloch43].

Rosenblatt's Perceptron

It wasn't until 1959 that a neural system caught the attention of the AI community: Rosenblatt's perceptron [Rosenblatt59], a model of the human visual system (hence the name).

The perceptron is capable of extracting visual patterns from images (a popular problem at the time). From the bitmap, randomly weighted connections provide a set of features to the actual perceptron. In turn, these features are connected by weights to the output, which provides an interpretation of the image. By training the perceptron on a collection of sample bitmaps with their corresponding outputs, the system can learn to classify the images (see Figure 17.1).

Figure 17.1. Rosenblatt's perceptron connected to a bitmap image, capable of recognizing some of its features.

For the AI community, one of the most interesting aspects of this perceptron was the proof of convergence established by Rosenblatt: if a solution exists that the perceptron can represent, the training algorithm is guaranteed to find it. This generated quite a bit of interest!
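To make the idea concrete, here is a minimal sketch of a single perceptron unit with Rosenblatt's training rule. The structure and names are illustrative, not from the original paper; it assumes real-valued feature inputs and a hard threshold at zero.

#include <vector>
#include <cstddef>

// Hypothetical sketch of a single perceptron unit: a weighted
// sum of feature inputs, thresholded to produce a binary output.
struct Perceptron {
    std::vector<float> weights; // one weight per input feature
    float bias = 0.0f;

    // Classify a feature vector: weighted sum, then threshold at zero.
    int predict(const std::vector<float>& x) const {
        float sum = bias;
        for (std::size_t i = 0; i < weights.size(); ++i)
            sum += weights[i] * x[i];
        return sum >= 0.0f ? 1 : 0;
    }

    // Rosenblatt's training rule: adjust the weights only when the
    // prediction is wrong. Repeated over the samples, this converges
    // whenever the data is linearly separable -- the property
    // Rosenblatt proved.
    void train(const std::vector<float>& x, int target, float rate) {
        int error = target - predict(x); // -1, 0, or +1
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] += rate * error * x[i];
        bias += rate * error;
    }
};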

Widrow and Hoff's Adaline

Widrow and Hoff took a hardware-based approach, presenting many useful improvements on the perceptron. Notably, the signals used within the neural network take values in {–1, +1}, rather than the common {0, 1}. This gives each processing unit more interesting computational properties; there is always an input signal, positive or negative, so every weight plays a role.

However, the major contribution of the Adaline was its training rule. Based on the mathematically more robust "delta rule" (explained later in the chapter), these networks can be trained to find the best possible approximation to the problem, not just any solution. This rule, rather than the original perceptron training algorithm, is now the one most commonly used in neural network learning.
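As a rough sketch of the difference (again with illustrative names, not Widrow and Hoff's own code), the delta rule measures the error on the raw weighted sum rather than the thresholded output, so every sample nudges the weights toward the least-squares solution:

#include <vector>
#include <cstddef>

// Hypothetical Adaline-style unit trained with the delta rule (LMS).
struct Adaline {
    std::vector<float> weights;
    float bias = 0.0f;

    // Raw weighted sum, before any thresholding.
    float activation(const std::vector<float>& x) const {
        float sum = bias;
        for (std::size_t i = 0; i < weights.size(); ++i)
            sum += weights[i] * x[i];
        return sum;
    }

    // Delta rule: the error is continuous, not just right/wrong,
    // so training seeks the best approximation rather than stopping
    // at the first solution. Targets are bipolar (-1 or +1), matching
    // Adaline's signal range.
    void train(const std::vector<float>& x, float target, float rate) {
        float error = target - activation(x);
        for (std::size_t i = 0; i < weights.size(); ++i)
            weights[i] += rate * error * x[i];
        bias += rate * error;
    }
};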

Biological Inspiration

Most of these neural networks tend to be introduced as biologically inspired models of the human brain. Although the initial inspiration is undoubtedly neurobiological, presenting them as a small "brain" with "neurons" invites many unfounded and often irrational assumptions. The capabilities of such techniques then become blurred, turning would-be developers into dreamers.

Instead, we'll study the computer representation of perceptrons, rather than the biological neurons they're supposed to model. We'll then demonstrate the problem perceptrons try to solve, explain how they manage to do it, and analyze how well they work. This is the kind of scientific knowledge we would expect from AI engineers and game programmers (and it's more intuitive).
