Boosting as Frank-Wolfe

22 September 2022
Ryotaro Mitsuboshi
Kohei Hatano
Eiji Takimoto
arXiv: 2209.10831
Abstract

Some boosting algorithms, such as LPBoost, ERLPBoost, and C-ERLPBoost, aim to solve the soft margin optimization problem with $\ell_1$-norm regularization. LPBoost rapidly converges to an $\epsilon$-approximate solution in practice, but it is known to take $\Omega(m)$ iterations in the worst case, where $m$ is the sample size. On the other hand, ERLPBoost and C-ERLPBoost are guaranteed to converge to an $\epsilon$-approximate solution in $O(\frac{1}{\epsilon^2} \ln \frac{m}{\nu})$ iterations, but their per-iteration computational cost is much higher than that of LPBoost. To address this issue, we propose a generic boosting scheme that combines the Frank-Wolfe algorithm with any secondary algorithm and switches between them iteratively. We show that the scheme retains the same convergence guarantee as ERLPBoost and C-ERLPBoost, and one can incorporate any secondary algorithm to improve practical performance. This scheme arises from a unified view of boosting algorithms for soft margin optimization: more specifically, we show that LPBoost, ERLPBoost, and C-ERLPBoost are all instances of the Frank-Wolfe algorithm. In experiments on real datasets, one instance of our scheme exploits the better updates of the secondary algorithm and performs comparably with LPBoost.
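To make the "Frank-Wolfe plus secondary algorithm" idea concrete, the sketch below shows one way such a switching loop can look: each round computes the standard Frank-Wolfe update and a candidate from an arbitrary secondary update rule, then keeps whichever iterate has the lower objective value. This is a minimal illustration only, not the paper's algorithm; the toy objective, the multiplicative-weights secondary step, and all names and parameters are assumptions introduced here for the example.

```python
# Hypothetical sketch: a Frank-Wolfe loop over the probability simplex that
# interleaves a "secondary" update and never does worse than the FW step.
# The objective and the secondary step are illustrative assumptions, not the
# authors' implementation.
import numpy as np

def frank_wolfe_with_secondary(f, grad_f, d0, secondary_step, n_iters=100):
    """Minimize a smooth convex f over the probability simplex.

    f, grad_f      : value and gradient oracles for the objective
    d0             : initial point in the simplex
    secondary_step : any rule mapping the current iterate to another point
                     in the simplex (the "secondary algorithm")
    """
    d = d0.copy()
    for t in range(n_iters):
        g = grad_f(d)
        # Frank-Wolfe linear minimization over the simplex: put all mass
        # on the coordinate with the smallest partial derivative.
        s = np.zeros_like(d)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2.0)              # standard FW step size
        d_fw = (1.0 - gamma) * d + gamma * s
        # Candidate from the secondary algorithm; taking the better of the
        # two iterates preserves the FW-style convergence guarantee.
        d_sec = secondary_step(d)
        d = d_fw if f(d_fw) <= f(d_sec) else d_sec
    return d

if __name__ == "__main__":
    # Toy smooth objective on the simplex: f(d) = 0.5 * ||A d - b||^2.
    rng = np.random.default_rng(0)
    A, b = rng.normal(size=(20, 5)), rng.normal(size=20)

    def f(d):
        return 0.5 * np.sum((A @ d - b) ** 2)

    def grad_f(d):
        return A.T @ (A @ d - b)

    def mw_step(d, eta=0.1):
        # Illustrative secondary update: a multiplicative-weights step,
        # renormalized to stay on the simplex.
        w = d * np.exp(-eta * grad_f(d))
        return w / w.sum()

    d0 = np.full(5, 1.0 / 5)
    d_star = frank_wolfe_with_secondary(f, grad_f, d0, mw_step)
    print("objective:", f(d_star))
```

Because the loop only accepts the secondary candidate when it does not increase the objective, any update rule can be plugged in without losing the worst-case convergence behavior of the plain Frank-Wolfe step.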
