Machine learning algorithms take a training set, form hypotheses or models, and make predictions about future data. Because the training set is finite and the future is uncertain, learning theory usually cannot give absolute guarantees about an algorithm's performance. Instead, probabilistic bounds on performance are common. Other forms of guarantee include bounds on the maximum number of mind changes an algorithm needs before it converges to a correct hypothesis in the limit.
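As an illustration of such a probabilistic bound (this example is not from the original article), the standard PAC result for a finite hypothesis class says that a learner returning any hypothesis consistent with m ≥ (ln|H| + ln(1/δ))/ε examples has true error at most ε with probability at least 1 − δ. A minimal sketch:

```python
import math

def pac_sample_size(hypothesis_count, epsilon, delta):
    """Number of examples sufficient for a consistent learner over a
    finite hypothesis class to have true error <= epsilon with
    probability >= 1 - delta, using the standard PAC bound
    m >= (ln|H| + ln(1/delta)) / epsilon."""
    return math.ceil(
        (math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon
    )

# e.g. 1024 hypotheses, 5% error tolerance, 99% confidence
print(pac_sample_size(1024, epsilon=0.05, delta=0.01))
```

Note that the bound is probabilistic, not absolute: even with this many examples, the learner may still fail with probability up to δ.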
There are several different branches of learning theory, which are often mathematically incompatible. This incompatibility arises from their use of different inference principles: principles which tell you how to generalize from limited data.
Examples of different branches of learning theory include:

- Probably approximately correct (PAC) learning
- Statistical learning theory
- Bayesian inference
- Algorithmic learning theory (learning in the limit)
Learning theory has led to practical algorithms. For example, PAC theory inspired boosting, statistical learning theory led to support vector machines, and Bayesian inference led to belief networks (Judea Pearl).