Wide Boosting

20 July 2020
M. Horrell
Abstract

Gradient boosting (GB) is a popular methodology used to solve prediction problems through minimization of a differentiable loss function, L. GB is especially performant in low- and medium-dimensional problems. This paper presents a simple adjustment to GB motivated in part by artificial neural networks. Specifically, our adjustment inserts a square or rectangular matrix multiplication between the output of a GB model and the loss, L. This allows the output of a GB model to have increased dimension prior to being fed into the loss and is thus "wider" than standard GB implementations. We provide performance comparisons on several publicly available datasets. When using the same tuning methodology and same maximum boosting rounds, Wide Boosting outperforms standard GB in every dataset we try.
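The adjustment described above amounts to composing the (widened) boosted output with a matrix before it reaches the loss, so the gradients handed back to the booster follow from the chain rule. Below is a minimal NumPy sketch of that gradient computation, assuming a squared-error loss and a widening matrix held fixed for the round; the function name, the variable `W`, and the dimensions are illustrative and not taken from the paper.

```python
import numpy as np

def wide_squared_error_grad(F, W, y):
    """Loss and gradient of L(y, F @ W) = 0.5 * ||F @ W - y||^2 w.r.t. F.

    F : (n, k) wide GB outputs (one k-vector per example)
    W : (k, d) widening matrix mapping the wide output to the loss dimension
    y : (n, d) targets
    """
    pred = F @ W                       # (n, d) prediction fed to the loss
    resid = pred - y                   # (n, d)
    loss = 0.5 * np.sum(resid ** 2) / len(y)
    grad = resid @ W.T                 # (n, k) chain rule: dL/dF = (dL/dpred) W^T
    return loss, grad

# Toy usage: wide dimension k = 8 mapped down to a scalar target (d = 1).
rng = np.random.default_rng(0)
n, k, d = 100, 8, 1
F = rng.normal(size=(n, k))            # stand-in for the current boosted outputs
W = rng.normal(size=(k, d))
y = rng.normal(size=(n, d))
loss, grad = wide_squared_error_grad(F, W, y)
print(loss, grad.shape)                # gradient has the wide shape (n, k)
```

The gradient (and, analogously, the Hessian diagonal) in the wide coordinate system is what a standard multi-output GB library with a custom objective would fit its next round of trees against; only the extra matrix multiplication and chain rule distinguish this from ordinary GB.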
