Parameter-free Regret in High Probability with Heavy Tails

25 October 2022
Jiujia Zhang, Ashok Cutkosky
arXiv:2210.14355 · PDF · HTML
Abstract

We present new algorithms for online convex optimization over unbounded domains that obtain parameter-free regret in high probability given access only to potentially heavy-tailed subgradient estimates. Previous work in unbounded domains considers only in-expectation results for sub-exponential subgradients. Unlike in the bounded-domain case, we cannot rely on straightforward martingale concentration, because the algorithm can produce exponentially large iterates. We develop new regularization techniques to overcome these problems. Overall, with probability at least $1 - \delta$, for all comparators $\mathbf{u}$ our algorithm achieves regret $\tilde{O}(\|\mathbf{u}\| T^{1/\mathfrak{p}} \log(1/\delta))$ for subgradients with bounded $\mathfrak{p}$-th moments, for some $\mathfrak{p} \in (1, 2]$.
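For background: in online convex optimization, regret against a comparator $\mathbf{u}$ is $R_T(\mathbf{u}) = \sum_{t=1}^T \ell_t(x_t) - \ell_t(\mathbf{u})$, which convexity upper-bounds by $\sum_{t=1}^T \langle g_t, x_t - \mathbf{u} \rangle$; "parameter-free" means the bound holds for every $\mathbf{u}$ simultaneously, with no tuning to $\|\mathbf{u}\|$. The sketch below is an illustration only, not the paper's algorithm: it pairs a standard one-dimensional coin-betting (Krichevsky-Trofimov) parameter-free learner with gradient clipping, a generic off-the-shelf remedy for heavy-tailed subgradient estimates. The class name, clipping threshold, and noise model are all assumptions made for the sketch.

import numpy as np

class ClippedKTBettor:
    # One-dimensional parameter-free learner: a Krichevsky-Trofimov (KT)
    # coin-betting update applied to clipped subgradient estimates.
    # Clipping is a generic heavy-tail remedy used here for illustration,
    # not the regularization technique the paper develops.

    def __init__(self, initial_wealth=1.0, clip=1.0):
        self.wealth = initial_wealth  # betting wealth, stays positive
        self.grad_sum = 0.0           # sum of normalized gradients seen so far
        self.t = 0                    # round counter
        self.clip = clip              # assumed clipping threshold
        self.x = 0.0                  # current iterate (the "bet")

    def predict(self):
        # KT betting fraction: beta_t = -(sum of past gradients) / t,
        # so |beta_t| < 1 and the wealth never becomes non-positive.
        self.t += 1
        beta = -self.grad_sum / self.t
        self.x = beta * self.wealth
        return self.x

    def update(self, grad):
        # Clip the possibly heavy-tailed estimate and normalize to [-1, 1].
        g = float(np.clip(grad, -self.clip, self.clip)) / self.clip
        self.wealth -= g * self.x  # coin-betting wealth update
        self.grad_sum += g

# Usage: track a fixed target under heavy-tailed subgradient noise.
rng = np.random.default_rng(0)
learner = ClippedKTBettor(clip=5.0)
target = 3.0
for _ in range(20000):
    x = learner.predict()
    noise = rng.standard_t(df=2.5)  # finite p-th moments only for p < 2.5
    learner.update(np.sign(x - target) + noise)  # noisy subgradient of |x - target|
print(f"final iterate: {learner.predict():.3f}, target: {target}")

Coin betting replaces learning-rate tuning with wealth accumulation, which is what makes the learner parameter-free, while clipping trades a controlled bias for bounded increments so that standard martingale concentration applies.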
