Learning theory (statistics)

In statistics, learning theory is a mathematical field concerned with the analysis of machine learning algorithms.

Machine learning algorithms take a training set, form hypotheses or models, and make predictions about the future. Because the training set is finite and the future is uncertain, learning theory usually does not yield absolute guarantees on the performance of an algorithm. Instead, probabilistic bounds on performance are quite common. Other kinds of bounds include the maximal number of mind changes an algorithm requires to converge to a solution in the limit.
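As a concrete illustration of such a probabilistic bound (an example of ours, not part of the original article), the classic PAC sample-complexity result for a finite hypothesis class says that a consistent learner trained on m ≥ (1/ε)(ln|H| + ln(1/δ)) examples has, with probability at least 1 − δ, true error at most ε. A minimal sketch of computing this bound, with a function name of our own choosing:

```python
import math

def pac_sample_complexity(hypothesis_count, epsilon, delta):
    """Smallest m satisfying the standard PAC bound for a finite
    hypothesis class: m >= (1/epsilon) * (ln|H| + ln(1/delta)).
    With that many examples, a hypothesis consistent with the data
    has true error at most epsilon with probability >= 1 - delta."""
    return math.ceil(
        (math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon
    )

# More hypotheses, tighter accuracy, or higher confidence all raise
# the number of training examples the bound demands.
m = pac_sample_complexity(hypothesis_count=2**20, epsilon=0.05, delta=0.01)
```

Note that the guarantee is probabilistic, not absolute: with probability up to δ the training set is unrepresentative and the learned hypothesis may be poor, which is exactly the flavor of result the paragraph above describes.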

There are several different branches of learning theory, which are often mathematically incompatible. This incompatibility arises from their use of different inference principles: principles which tell you how to generalize from limited data.

Examples of different branches of learning theory include probably approximately correct (PAC) learning, statistical learning theory (VC theory), Bayesian inference, and algorithmic learning theory (identification in the limit).

Learning theory has led to practical algorithms. For example, PAC theory inspired boosting, statistical learning theory led to support vector machines, and Bayesian inference led to belief networks (by Judea Pearl).

All Wikipedia text is available under the terms of the GNU Free Documentation License