Backpropagation

Backpropagation is a technique used for training artificial neural networks. It is applicable only to feed-forward networks, that is, networks with no feedback connections (no connections that loop back).

Backpropagation also requires that the transfer function used for the neurons be differentiable.
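For example, the widely used logistic (sigmoid) transfer function is differentiable everywhere, and its derivative can be written in terms of the function's own output. The following minimal Python sketch illustrates this (the function names are illustrative, not standard):

    import numpy as np

    def sigmoid(x):
        # Logistic transfer function; smooth and differentiable everywhere.
        return 1.0 / (1.0 + np.exp(-x))

    def sigmoid_prime(x):
        # The derivative, expressed through the function's own output,
        # which is exactly the value backpropagation reuses on the
        # backward pass.
        s = sigmoid(x)
        return s * (1.0 - s)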

The gist of the technique is as follows (a code sketch of these steps appears after the list):

  1. Present a training sample to the neural network.
  2. Compare the network's output with the desired output for that sample. Calculate the error in each output neuron.
  3. For each output neuron, calculate how much higher or lower its output should be, using the error, the actual output, and a scaling factor (the derivative of the transfer function). This is the local error.
  4. Using the weights on the neuron's incoming connections, assign "blame" for the local error to the neurons at the previous level.
  5. Repeat the steps above on the neurons at the previous level, using each one's "blame" as its error.
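The Python sketch below walks through these steps for a network with one hidden layer of sigmoid neurons, trained by gradient descent on the squared error. It is a minimal illustration under simplifying assumptions, not a definitive implementation: the layer sizes, learning rate, random seed, and omission of bias terms are all choices made here for brevity.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Illustrative sizes: 2 inputs, 3 hidden neurons, 1 output; no biases.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(2, 3))   # input -> hidden weights
    W2 = rng.normal(size=(3, 1))   # hidden -> output weights

    def train_step(x, target, lr=0.5):
        global W1, W2
        # Step 1: present a training sample (forward pass).
        h = sigmoid(x @ W1)                      # hidden activations
        y = sigmoid(h @ W2)                      # network output
        # Step 2: compare the output with the desired output.
        error = y - target
        # Step 3: scale the error by the derivative of the transfer
        # function to get each output neuron's local error ("delta").
        delta_out = error * y * (1.0 - y)
        # Step 4: assign "blame" to the hidden neurons through the
        # weights on the output neuron's incoming connections.
        delta_hidden = (delta_out @ W2.T) * h * (1.0 - h)
        # Step 5: in a deeper network this would repeat, layer by
        # layer, back towards the input. Finally, update the weights.
        W2 -= lr * np.outer(h, delta_out)
        W1 -= lr * np.outer(x, delta_hidden)
        return float(0.5 * np.sum(error ** 2))   # squared error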

As the algorithm's name implies, the errors (and therefore the learning) propagate backwards from the output nodes to the inner nodes.

Backpropagation usually allows quick convergence to a satisfactory local minimum of the error in the kinds of networks to which it is suited.
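As a hypothetical usage of the train_step sketch above, repeatedly presenting a small set of samples usually drives the error down quickly, though which minimum is reached depends on the random initial weights:

    # Four samples of the XOR function (inputs and desired outputs).
    samples = [(np.array([0., 0.]), np.array([0.])),
               (np.array([0., 1.]), np.array([1.])),
               (np.array([1., 0.]), np.array([1.])),
               (np.array([1., 1.]), np.array([0.]))]

    for epoch in range(5000):
        loss = sum(train_step(x, t) for x, t in samples)
    print(loss)   # usually small, but a poor local minimum is possible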


