
Learning theory (statistics)

In statistics, learning theory is a mathematical field concerned with the analysis of machine learning algorithms.

Machine learning algorithms take a training set, form hypotheses or models, and make predictions about the future. Because the training set is finite and the future is uncertain, learning theory usually does not yield absolute guarantees of an algorithm's performance. Instead, probabilistic bounds on the performance of machine learning algorithms are quite common. Other forms of bound include the maximal number of mind changes an algorithm may make before converging to a correct hypothesis in the limit.
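Such a probabilistic bound can be sketched numerically. The following is an illustrative example, not taken from the article: a Hoeffding-style bound (with a union bound over a finite hypothesis class) on how far true error can exceed training error; the function name and parameter names are my own.

```python
import math

def generalization_gap(m, h_size, delta):
    """Illustrative Hoeffding-style bound: with probability at least
    1 - delta, every hypothesis in a finite class of h_size hypotheses
    has true error within this gap of its empirical error on m i.i.d.
    training examples."""
    return math.sqrt(math.log(2 * h_size / delta) / (2 * m))

# The guarantee is probabilistic, and the gap shrinks as the
# training set grows (roughly like 1 / sqrt(m)).
small_m = generalization_gap(100, 1000, 0.05)
large_m = generalization_gap(10_000, 1000, 0.05)
```

Note how the bound never becomes a certainty: lowering delta (demanding higher confidence) widens the gap, reflecting that finite data cannot yield absolute guarantees.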

There are several different branches of learning theory, which are often mathematically incompatible. This incompatibility arises from their use of different inference principles: principles which tell you how to generalize from limited data.

Examples of different branches of learning theory include:

- probably approximately correct (PAC) learning
- statistical learning theory (VC theory)
- Bayesian inference
- algorithmic learning theory (learning in the limit)

Learning theory has led to practical algorithms. For example, PAC theory inspired boosting, statistical learning theory led to support vector machines, and Bayesian inference led to belief networks (due to Judea Pearl).
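The Bayesian branch can be illustrated with the simplest conjugate update, which underlies inference in belief networks. This is a minimal sketch, not drawn from the article; the function name is my own.

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update: a Beta(alpha, beta) prior over an
    unknown probability, combined with binomially distributed
    observations, yields another Beta distribution as the posterior."""
    return alpha + successes, beta + failures

# Start from a uniform prior Beta(1, 1) and observe 7 successes, 3 failures.
a, b = beta_binomial_update(1, 1, 7, 3)
posterior_mean = a / (a + b)  # (1 + 7) / (1 + 7 + 1 + 3) = 8/12
```

The update shows the Bayesian inference principle in miniature: generalization from limited data is done by combining a prior with the likelihood of the observations, rather than by bounding worst-case error as in PAC or VC theory.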
