Catoni Contextual Bandits are Robust to Heavy-tailed Rewards

4 February 2025
Chenlu Ye
Yujia Jin
Alekh Agarwal
Tong Zhang
Abstract

Typical contextual bandit algorithms assume that the rewards at each round lie in some fixed range [0, R], and their regret scales polynomially with this reward range R. However, many practical scenarios naturally involve heavy-tailed rewards, or rewards where the worst-case range can be substantially larger than the variance. In this paper, we develop an algorithmic approach building on Catoni's estimator from robust statistics, and apply it to contextual bandits with general function approximation. When the variance of the reward at each round is known, we use a variance-weighted regression approach and establish a regret bound that depends only on the cumulative reward variance and logarithmically on the reward range R as well as the number of rounds T. For the unknown-variance case, we further propose a careful peeling-based algorithm and remove the need for cumbersome variance estimation. With additional dependence on the fourth moment, our algorithm also enjoys a variance-based bound with logarithmic reward-range dependence. Moreover, we demonstrate the optimality of the leading-order term in our regret bound through a matching lower bound.
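For intuition, Catoni's estimator replaces the empirical mean with the root of a truncated influence equation; the logarithmic growth of the influence function is what caps the effect of heavy-tailed outliers and yields only logarithmic dependence on the reward range. The sketch below is a minimal illustration of the classical Catoni mean estimator, not the paper's full contextual-bandit algorithm: the function names, the bisection root-finder, the Pareto reward distribution, and the tuning of the scale parameter alpha are all illustrative assumptions.

import numpy as np

def catoni_psi(x):
    # Catoni's influence function: psi(x) = sign(x) * log(1 + |x| + x^2 / 2).
    # It is odd and monotone increasing, but grows only logarithmically,
    # which bounds the influence of any single heavy-tailed sample.
    return np.sign(x) * np.log1p(np.abs(x) + 0.5 * x * x)

def catoni_mean(samples, alpha):
    # Catoni's M-estimator: the unique root theta of
    #     sum_i psi(alpha * (x_i - theta)) = 0,
    # located here by bisection (the sum is decreasing in theta, and the
    # root is bracketed by the sample minimum and maximum).
    x = np.asarray(samples, dtype=float)
    lo, hi = x.min(), x.max()
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if catoni_psi(alpha * (x - mid)).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Heavy-tailed example (assumed for illustration): Pareto rewards with
# shape 2.5 and scale 1, so the true mean is 2.5 / 1.5 ~ 1.667.
rng = np.random.default_rng(0)
rewards = rng.pareto(2.5, size=2000) + 1.0
delta = 0.05  # target failure probability
# One standard tuning, alpha ~ sqrt(2 log(1/delta) / (n * variance)); in the
# paper's known-variance setting the true variance would be supplied rather
# than estimated from the samples as done here.
alpha = np.sqrt(2.0 * np.log(1.0 / delta) / (rewards.var() * len(rewards)))
print("Catoni estimate:", catoni_mean(rewards, alpha))
print("Empirical mean: ", rewards.mean())

On such heavy-tailed draws, the Catoni estimate concentrates around the true mean at a sub-Gaussian rate governed by the variance, while the empirical mean remains sensitive to extreme samples; the paper's variance-weighted regression and peeling constructions extend this idea to the contextual-bandit setting with general function approximation.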

@article{ye2025_2502.02486,
  title={Catoni Contextual Bandits are Robust to Heavy-tailed Rewards},
  author={Chenlu Ye and Yujia Jin and Alekh Agarwal and Tong Zhang},
  journal={arXiv preprint arXiv:2502.02486},
  year={2025}
}