
The Convergence Rate of AdaBoost and Friends

Abstract

The worst-case convergence rate of AdaBoost, and of a family of related optimization problems including LogitBoost, is $\mathcal{O}(1/\epsilon)$, where the hidden constants depend on the loss function, the weak learning class, and the sample. In certain situations, the rate improves to $\mathcal{O}(\ln(1/\epsilon))$. Lastly, Schapire's example of slow convergence has rate $\Omega(1/\epsilon)$; however, its hidden constants do not match those of the upper bound.
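The rates above concern how quickly AdaBoost drives its exponential loss to within $\epsilon$ of the optimum. A minimal sketch of this view is below: AdaBoost run as greedy coordinate descent on the exponential loss, recording the loss after each round so its decay toward the optimum can be inspected. The function name `adaboost_exp_loss`, the decision-stump weak learner, and the data format are illustrative assumptions, not constructions from the paper.

```python
# Illustrative sketch only: AdaBoost with decision stumps, tracking the
# exponential loss per round. Not the paper's construction or analysis.
import numpy as np

def adaboost_exp_loss(X, y, rounds):
    """Run AdaBoost on labels y in {-1, +1}; return the exponential loss per round."""
    n, d = X.shape
    margins = np.zeros(n)              # running margins y_i * F(x_i)
    losses = []
    for _ in range(rounds):
        w = np.exp(-margins)
        w /= w.sum()                   # AdaBoost's example weights
        # exhaustively pick the stump (feature, threshold, sign) with least weighted error
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    h = s * np.where(X[:, j] <= thr, 1, -1)
                    err = w[h != y].sum()
                    if best is None or err < best[0]:
                        best = (err, h)
        err, h = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)    # exact line search for the exponential loss
        margins += alpha * y * h
        losses.append(np.exp(-margins).mean())   # exponential loss after this round
    return losses
```

On instances where AdaBoost converges slowly, one would expect the gap between the recorded loss and its infimum to shrink roughly like a constant over the number of rounds, consistent with the $\mathcal{O}(1/\epsilon)$ worst-case rate; on favorable instances the decay can be much faster, in line with the $\mathcal{O}(\ln(1/\epsilon))$ regime.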
