Online Continuous DR-Submodular Maximization with Long-Term Budget Constraints

30 June 2019
Omid Sadeghi
Maryam Fazel
arXiv: 1907.00316
Abstract

In this paper, we study a class of online optimization problems with long-term budget constraints in which the objective functions are not necessarily concave (nor convex) but instead satisfy the Diminishing Returns (DR) property. Specifically, a sequence of monotone DR-submodular objective functions $\{f_t(x)\}_{t=1}^T$ and monotone linear budget functions $\{\langle p_t, x\rangle\}_{t=1}^T$ arrive over time and, given a total targeted budget $B_T$, the goal is to choose a point $x_t$ at each time $t\in\{1,\dots,T\}$, without knowing $f_t$ and $p_t$ at that step, so as to achieve a sub-linear regret bound while keeping the total budget violation $\sum_{t=1}^T \langle p_t, x_t\rangle - B_T$ sub-linear as well. Prior work has shown that achieving sub-linear regret is impossible if the budget functions are chosen adversarially. Therefore, we modify the notion of regret by comparing the agent against a $(1-\frac{1}{e})$-approximation to the best fixed decision in hindsight that satisfies the budget constraint proportionally over every window of length $W$. We propose the Online Saddle Point Hybrid Gradient (OSPHG) algorithm to solve this class of online problems. For $W=T$, we recover the aforementioned impossibility result. However, when $W=o(T)$, we show that it is possible to obtain sub-linear bounds for both the $(1-\frac{1}{e})$-regret and the total budget violation.
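
One natural way to formalize the windowed benchmark described in the abstract is sketched below. This formalization is inferred from the abstract's wording, not quoted from the paper: the comparator is restricted to fixed decisions whose spending over every length-$W$ window stays proportional to the total budget.

```latex
% Assumed formalization of the windowed benchmark (not verbatim from the paper):
% fixed decisions whose spend over every length-W window is at most (W/T) B_T.
\mathcal{X}_W = \Bigl\{ x \in \mathcal{X} :
  \sum_{t=\tau}^{\tau+W-1} \langle p_t, x \rangle \le \tfrac{W}{T}\, B_T
  \ \text{ for all } 1 \le \tau \le T-W+1 \Bigr\},
\qquad
\mathrm{Regret}_{1-1/e}(T) = \Bigl(1-\tfrac{1}{e}\Bigr)
  \max_{x \in \mathcal{X}_W} \sum_{t=1}^{T} f_t(x) - \sum_{t=1}^{T} f_t(x_t).
```

For intuition about the saddle-point approach, here is a minimal sketch of a generic online primal-dual gradient loop for this setting. It is not the paper's OSPHG algorithm: the toy objective $f_t(x)=\sum_i \log(1+a_{t,i}x_i)$ (monotone and DR-submodular on the box), the constraint set $[0,1]^n$, the budget $B_T$, and the step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n = 1000, 5                 # horizon and dimension (toy choices)
B_T = 0.5 * T                  # total targeted budget over the horizon
eta_x, eta_lam = 0.05, 0.05    # primal / dual step sizes

x = np.zeros(n)                # decision, kept in the box [0, 1]^n
lam = 0.0                      # dual variable for the budget constraint
cum_reward, cum_spend = 0.0, 0.0

for t in range(T):
    # Adversary reveals f_t (via a_t) and p_t only after x_t is committed.
    a_t = rng.uniform(0.5, 1.5, size=n)   # toy data defining f_t
    p_t = rng.uniform(0.0, 1.0, size=n)   # per-round price (budget) vector

    # Reward and spend of the committed point x_t.
    cum_reward += np.sum(np.log1p(a_t * x))
    spend_t = p_t @ x
    cum_spend += spend_t

    # Gradient of the per-round Lagrangian
    #   L_t(x, lam) = f_t(x) - lam * (<p_t, x> - B_T / T).
    grad_f = a_t / (1.0 + a_t * x)

    # Primal ascent step on L_t, projected back onto the box [0, 1]^n.
    x = np.clip(x + eta_x * (grad_f - lam * p_t), 0.0, 1.0)

    # Dual ascent step on the per-round budget violation, kept nonnegative.
    lam = max(0.0, lam + eta_lam * (spend_t - B_T / T))

print(f"average reward:         {cum_reward / T:.3f}")
print(f"total budget violation: {cum_spend - B_T:.1f}")
```

The dual variable grows when spending exceeds the proportional budget $B_T/T$ and shrinks otherwise, so it acts as a time-varying price that discourages over-spending while the primal step follows the (stochastic) gradient of the reward.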
