Boosting

Boosting is a machine learning technique for performing supervised learning. Boosting proceeds in stages, incrementally adding to the current learned function. At every stage, a weak learner (i.e., one whose accuracy may be only slightly better than chance) is trained on the data. The output of the weak learner is then added to the learned function with some strength, proportional to how accurate the weak learner is. Then the data is reweighted: examples that the current learned function gets wrong are "boosted" in importance, so that future weak learners will attempt to fix those errors.
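The staged procedure above can be sketched as a minimal AdaBoost-style loop. This is an illustrative assumption, not taken from the article: it uses decision stumps as the weak learner, labels in {+1, -1}, and the standard exponential reweighting rule.

```python
import numpy as np

def stump_train(X, y, w):
    """Find the (feature, threshold, polarity) stump with lowest weighted error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, t, polarity)
    return best, best_err

def stump_predict(stump, X):
    j, t, polarity = stump
    return np.where(polarity * (X[:, j] - t) >= 0, 1, -1)

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start with uniform example weights
    ensemble = []
    for _ in range(rounds):
        stump, err = stump_train(X, y, w)
        err = max(err, 1e-10)                  # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # strength grows with accuracy
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)         # misclassified examples are "boosted"
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(alpha * stump_predict(stump, X) for alpha, stump in ensemble)
    return np.where(score >= 0, 1, -1)
```

Each round trains a stump on the current weights, adds it to the ensemble scaled by its strength `alpha`, and multiplies the weights of misclassified examples by `exp(alpha)` (relatively speaking), so the next stump concentrates on the hard cases.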

There are several different boosting algorithms, depending on the exact mathematical form of the strength coefficient and the reweighting rule. One of the most common boosting algorithms is AdaBoost. Most boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in function space.
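The "gradient descent in function space" view can be made concrete with a short derivation, sketched here in standard notation that is not drawn from the article. Take a decreasing margin cost on the current ensemble function F:

```latex
C(F) = \sum_i c\big(y_i F(x_i)\big), \qquad c \text{ decreasing (AdaBoost: } c(z) = e^{-z}\text{)}.
```

Each round, AnyBoost chooses the weak learner f_t that points along the negative functional gradient, i.e., that maximizes

```latex
-\langle \nabla C(F), f_t \rangle \;=\; \sum_i w_i \, y_i \, f_t(x_i),
\qquad w_i = -c'\big(y_i F(x_i)\big) \ge 0,
```

which is the same as minimizing weighted training error under the weights w_i. For the exponential cost these weights are w_i = e^{-y_i F(x_i)}, recovering exactly AdaBoost's reweighting of misclassified examples.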

Boosting is grounded in probably approximately correct (PAC) learning, a branch of computational learning theory.

Algorithmically, boosting is related to other ensemble methods such as bagging, although unlike bagging it trains its learners sequentially on reweighted data rather than independently on bootstrap samples.

All Wikipedia text is available under the terms of the GNU Free Documentation License
