The Convergence Rate of AdaBoost and Friends
Abstract
The worst-case convergence rate of AdaBoost, and of a family of related optimization problems including LogitBoost, is O(1/ε), where the hidden terms depend on the loss function, the weak learning class, and the sample. In certain situations, the rate improves to O(ln(1/ε)). Lastly, Schapire's example of slow convergence has rate Ω(1/ε); however, the hidden terms are not tight with the upper bound.
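As a concrete illustration (not part of the paper), the following Python sketch runs AdaBoost as coordinate descent on the empirical exponential loss over axis-aligned decision stumps and prints the loss per round; the toy dataset, the stump enumeration, and all helper names are assumptions made for demonstration only.

```python
# A minimal sketch of AdaBoost as coordinate descent on the exponential loss.
# Everything here (dataset, stump class, names) is illustrative; the point is
# only to watch the empirical exponential loss shrink with the round count t.
import numpy as np

rng = np.random.default_rng(0)

# Toy sample: 2-D points with labels in {-1, +1}.
n = 200
X = rng.normal(size=(n, 2))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
y[y == 0] = 1

def stumps(X):
    """Enumerate axis-aligned threshold stumps as an (n, H) prediction matrix."""
    cols = []
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sgn in (1.0, -1.0):
                cols.append(sgn * np.where(X[:, j] <= thr, 1.0, -1.0))
    return np.stack(cols, axis=1)

H = stumps(X)            # H[i, k] = prediction of weak learner k on example i
margins = np.zeros(n)    # running margins y_i * f_t(x_i)

for t in range(1, 51):
    w = np.exp(-margins)                     # unnormalized AdaBoost weights
    D = w / w.sum()                          # distribution over examples
    edges = (D * y) @ H                      # edge of each weak learner
    k = np.argmax(np.abs(edges))             # best weak learner this round
    # Clamp to avoid division by zero if a stump is perfectly correlated.
    gamma = np.clip(edges[k], -1 + 1e-9, 1 - 1e-9)
    alpha = 0.5 * np.log((1 + gamma) / (1 - gamma))  # exact line search
    margins += alpha * y * H[:, k]
    loss = np.mean(np.exp(-margins))         # empirical exponential loss
    if t % 10 == 0:
        print(f"round {t:3d}: exp-loss = {loss:.4f}")
```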
