
Simulation

As mentioned, the main difference between MLPs and plain perceptrons is the extra layer of hidden units. This intermediate layer means the information required to compute the output is not immediately available; the first layer must be processed before the second, its output serving as the input for the next layer, and so forth until the final result is determined. This is a simple iterative process that sweeps through the entire network.

This process emphasizes the feed-forward structure of perceptrons, and especially MLPs; hidden layers do not affect this property. Listing 19.1 shows pseudo-code that computes the output for an arbitrary number of layers.

Listing 19.1 Feed-Forward Simulation Algorithm Used to Filter the Inputs Through the MLP
# the first layer processes the input array
current = input
for layer from first to last
     # compute the output of each neuron
     for each i in [1..neurons] from layer
          # multiply the weight and input arrays together and add up the result
          s = NetSum( neuron[i].weights, current )
          # store the post-processed result
          output[i] = Activate( s )
     end for
     # the next layer uses this layer's output as input
     current = output
end for
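
To make the loop concrete, here is a minimal runnable sketch of the same procedure in Python. The names (feed_forward, net_sum, the sigmoid activation) and the weight values are illustrative assumptions, not the book's implementation; each layer is stored as a list of weight vectors, one per neuron, and any biases are assumed to be folded into the weights.

import math

def activate(s):
    # sigmoid squashing function; Activate() in the pseudo-code could be
    # any non-linearity, sigmoid is just a common choice (an assumption)
    return 1.0 / (1.0 + math.exp(-s))

def net_sum(weights, inputs):
    # multiply the weight and input arrays together and add up the result
    return sum(w * x for w, x in zip(weights, inputs))

def feed_forward(layers, pattern):
    # 'layers' is a list of layers; each layer is a list of weight vectors,
    # one per neuron in that layer
    current = pattern
    for layer in layers:
        # the next layer uses this layer's output as input
        current = [activate(net_sum(weights, current)) for weights in layer]
    return current

# example: a 2-3-1 network with arbitrary weights
layers = [
    [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]],  # hidden layer: 3 neurons, 2 inputs each
    [[0.7, -0.5, 0.2]],                      # output layer: 1 neuron, 3 inputs
]
print(feed_forward(layers, [1.0, 0.0]))

Note that the output array of one layer simply becomes the input array of the next, which is exactly the current = output step in the pseudo-code.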

In practice, this procedure can be used in the same fashion as plain perceptrons: we provide input patterns and collect the corresponding results. The output can be applied to approximate functions, classify patterns, or even control actuators (that is, artificial muscles). The next chapter demonstrates perceptrons in a game situation using function approximation.
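
As a quick illustration of the classification case, a pattern can be assigned to the class whose output neuron responds most strongly. This sketch assumes the feed_forward() function from above and a network with one output neuron per class; the weights are arbitrary placeholders.

# a toy 3-4-2 classifier: one output neuron per class (assumed convention)
net = [
    [[0.2, -0.1, 0.5], [0.4, 0.3, -0.2], [-0.5, 0.1, 0.6], [0.3, -0.4, 0.2]],
    [[0.6, -0.3, 0.2, 0.5], [-0.2, 0.4, 0.1, -0.6]],
]
outputs = feed_forward(net, [0.2, 0.9, 0.4])
best_class = max(range(len(outputs)), key=lambda i: outputs[i])
print("pattern assigned to class", best_class)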
