Gradient-Variation Bound for Online Convex Optimization with Constraints

22 June 2020
Shuang Qiu
Xiaohan Wei
Mladen Kolar
Abstract

We study online convex optimization with constraints consisting of multiple functional constraints and a relatively simple constraint set, such as a Euclidean ball. As enforcing the constraints at each time step through projections is computationally challenging in general, we allow decisions to violate the functional constraints but aim to achieve a low regret and cumulative violation of the constraints over a horizon of $T$ time steps. First-order methods achieve an $\mathcal{O}(\sqrt{T})$ regret and an $\mathcal{O}(1)$ constraint violation, which is the best-known bound under Slater's condition, but they do not take into account the structural information of the problem. Furthermore, the existing algorithms and analysis are limited to Euclidean space. In this paper, we provide an \emph{instance-dependent} bound for online convex optimization with complex constraints obtained by a novel online primal-dual mirror-prox algorithm. Our instance-dependent regret is quantified by the total gradient variation $V_*(T)$ in the sequence of loss functions. The proposed algorithm works in \emph{general} normed spaces and simultaneously achieves an $\mathcal{O}(\sqrt{V_*(T)})$ regret and an $\mathcal{O}(1)$ constraint violation, which is never worse than the best-known $(\mathcal{O}(\sqrt{T}), \mathcal{O}(1))$ result and improves over previous works that applied mirror-prox-type algorithms to this problem, achieving $\mathcal{O}(T^{2/3})$ regret and constraint violation. Finally, our algorithm is computationally efficient, as it only performs mirror descent steps in each iteration instead of solving a general Lagrangian minimization problem.
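
To make the primal-dual structure concrete, here is a minimal Python sketch, not the authors' algorithm: the paper works in general normed spaces with mirror maps, while this toy uses Euclidean projections onto a ball, a single functional constraint $g(x) \le 0$, and an optimistic, mirror-prox-style primal step that reuses the previous loss gradient as a hint (the mechanism that makes gradient-variation bounds possible). All names here (project_ball, run_primal_dual_omd, the step size eta) are illustrative assumptions, not from the paper.

    import numpy as np

    def project_ball(x, radius=1.0):
        # Euclidean projection onto the centered ball of the given radius:
        # the "relatively simple constraint set" from the abstract.
        norm = np.linalg.norm(x)
        return x if norm <= radius else x * (radius / norm)

    def run_primal_dual_omd(grad_fs, g, grad_g, dim, eta=0.1, radius=1.0):
        # Hypothetical optimistic primal-dual loop (Euclidean sketch).
        # grad_fs : list of callables, grad_fs[t](x) = gradient of loss f_t at x
        # g, grad_g : the functional constraint g(x) <= 0 and its gradient
        x_hat = np.zeros(dim)   # primal anchor point
        lam = 0.0               # dual variable for the functional constraint
        hint = np.zeros(dim)    # previous loss gradient, used as an optimistic hint
        plays = []
        for grad_f in grad_fs:
            # Optimistic step: commit to a decision using the previous gradient,
            # before the current loss f_t is revealed.
            x_t = project_ball(x_hat - eta * (hint + lam * grad_g(x_hat)), radius)
            plays.append(x_t)
            grad_t = grad_f(x_t)  # gradient of the newly revealed loss at the play point
            # Anchor update with the actual Lagrangian gradient at x_t.
            x_hat = project_ball(x_hat - eta * (grad_t + lam * grad_g(x_t)), radius)
            # Dual ascent on the observed violation, clipped to keep lam >= 0;
            # violations are allowed, but their running sum is penalized.
            lam = max(0.0, lam + eta * g(x_t))
            hint = grad_t
        return plays

    # Example: track a slowly drifting target in the plane, with the linear
    # constraint g(x) = <a, x> - b <= 0. Slow drift means consecutive loss
    # gradients are close, so the gradient variation V_*(T) stays small.
    a, b = np.ones(2) / np.sqrt(2), 0.5
    targets = [np.array([np.cos(t / 50.0), np.sin(t / 50.0)]) for t in range(200)]
    grad_fs = [(lambda x, c=c: 2 * (x - c)) for c in targets]  # f_t(x) = ||x - c_t||^2
    plays = run_primal_dual_omd(grad_fs, lambda x: a @ x - b, lambda x: a, dim=2)

Note the intuition the example is meant to convey: when the loss sequence changes slowly (small $V_*(T)$), the previous gradient is an accurate hint, the optimistic and anchor updates nearly coincide, and the regret shrinks accordingly, whereas a worst-case analysis would still only give $\mathcal{O}(\sqrt{T})$.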
