When Lower-Order Terms Dominate: Adaptive Expert Algorithms for Heavy-Tailed Losses

2 June 2025
Antoine Moulin
Emmanuel Esposito
Dirk van der Hoeven
Main: 9 pages · Appendix: 23 pages · Bibliography: 2 pages · 4 figures · 1 table
Abstract

We consider the problem setting of prediction with expert advice with possibly heavy-tailed losses, i.e., the only assumption on the losses is an upper bound on their second moments, denoted by $\theta$. We develop adaptive algorithms that do not require any prior knowledge about the range or the second moment of the losses. Existing adaptive algorithms have what is typically considered a lower-order term in their regret guarantees. We show that this lower-order term, which is often the maximum of the losses, can actually dominate the regret bound in our setting. Specifically, we show that even with small constant $\theta$, this lower-order term can scale as $\sqrt{KT}$, where $K$ is the number of experts and $T$ is the time horizon. We propose adaptive algorithms with improved regret bounds that avoid the dependence on such a lower-order term and guarantee $\mathcal{O}(\sqrt{\theta T \log(K)})$ regret in the worst case, and $\mathcal{O}(\theta \log(KT) / \Delta_{\min})$ regret when the losses are sampled i.i.d. from some fixed distribution, where $\Delta_{\min}$ is the difference between the mean losses of the second-best expert and the best expert. Additionally, when the loss function is the squared loss, our algorithm also guarantees improved regret bounds over prior results.
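To make the problem setting concrete, here is a minimal Python sketch of the prediction-with-expert-advice protocol with a clipped exponential-weights (Hedge) baseline. This is not the paper's algorithm: the paper's methods are adaptive and need no prior knowledge of $\theta$, whereas this sketch assumes $\theta$ is known, and its learning-rate and clipping choices are heuristic and purely illustrative.

```python
# Minimal sketch of prediction with expert advice under heavy-tailed losses,
# using a clipped exponential-weights (Hedge) baseline. NOT the paper's
# algorithm: tuning below assumes the second-moment bound `theta` is known,
# and both the learning rate and clip threshold are heuristic choices.

import numpy as np

def clipped_hedge(losses, theta):
    """Run clipped Hedge on a (T, K) array of per-round expert losses.

    losses : shape (T, K); entries may be heavy-tailed, assumed to
             satisfy E[loss**2] <= theta.
    theta  : assumed bound on the second moment of the losses.
    Returns the (pseudo-)regret against the best fixed expert.
    """
    T, K = losses.shape
    # Illustrative tuning: eta ~ sqrt(log(K) / (theta * T)), and a clip
    # magnitude chosen so clipped losses stay in a controlled range.
    eta = np.sqrt(np.log(K) / (theta * T))
    clip = np.sqrt(theta * T / np.log(K))

    log_w = np.zeros(K)               # log-weights, for numerical stability
    total_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                  # current distribution over experts
        total_loss += p @ losses[t]   # expected loss of the forecaster
        clipped = np.clip(losses[t], -clip, clip)
        log_w -= eta * clipped        # multiplicative-weights update
    return total_loss - losses.sum(axis=0).min()

# Usage: Student-t losses are heavy-tailed but have a small second moment.
rng = np.random.default_rng(0)
T, K, theta = 10_000, 10, 1.0
losses = rng.standard_t(df=2.5, size=(T, K)) * 0.3
print(clipped_hedge(losses, theta))
```

Clipping is one standard way to tame heavy tails in this protocol; the abstract's point is that naive adaptive tuning leaves a "lower-order" term driven by the maximum observed loss, which heavy tails can inflate to $\sqrt{KT}$ even when $\theta$ is a small constant.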

View on arXiv: https://arxiv.org/abs/2506.01722
@article{moulin2025_2506.01722,
  title={When Lower-Order Terms Dominate: Adaptive Expert Algorithms for Heavy-Tailed Losses},
  author={Antoine Moulin and Emmanuel Esposito and Dirk van der Hoeven},
  journal={arXiv preprint arXiv:2506.01722},
  year={2025}
}