ResearchTrend.AI
arXiv:2302.12320
Dynamic Regret Analysis of Safe Distributed Online Optimization for Convex and Non-convex Problems

23 February 2023
Ting-Jui Chang
Sapana Chaudhary
D. Kalathil
Shahin Shahrampour
Abstract

This paper addresses safe distributed online optimization over an unknown set of linear safety constraints. A network of agents aims to jointly minimize a global, time-varying function, which is only partially observable to each individual agent. Therefore, agents must engage in local communications to generate a safe sequence of actions competitive with the best minimizer sequence in hindsight, where the gap between the two sequences is quantified via dynamic regret. We propose distributed safe online gradient descent (D-Safe-OGD) with an exploration phase, in which all agents collaboratively estimate the constraint parameters to build estimated feasible sets, ensuring safe action selection during the optimization phase. We prove that for convex functions, D-Safe-OGD achieves a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{1/3} C_T^*)$, where $C_T^*$ denotes the path-length of the best minimizer sequence. We further prove a dynamic regret bound of $O(T^{2/3} \sqrt{\log T} + T^{2/3} C_T^*)$ for certain non-convex problems, which establishes the first dynamic regret bound for a safe distributed algorithm in the non-convex setting.
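The two-phase structure described in the abstract can be illustrated with a small sketch: agents first probe with small safe actions to estimate the unknown linear constraint by least squares, then run consensus-based online gradient descent projected onto a shrunken estimated feasible set. This is a minimal illustration under assumed toy parameters (a single constraint, quadratic local losses, a fully connected mixing matrix), not the paper's actual algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): 3 agents, 2-D actions,
# unknown linear safety constraint A x <= b.
n_agents, d, T_exp, T_opt = 3, 2, 200, 300
A_true = np.array([[1.0, 1.0]])  # unknown to the agents
b = np.array([1.0])
W = np.full((n_agents, n_agents), 1 / n_agents)  # doubly stochastic mixing matrix

# --- Exploration phase: each agent probes with small random (hence safe)
# actions, observes noisy constraint values, and estimates A by least squares.
estimates = []
for i in range(n_agents):
    X = 0.1 * rng.standard_normal((T_exp, d))
    y = X @ A_true.T + 0.01 * rng.standard_normal((T_exp, 1))
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0].T)
A_hat = np.mean(estimates, axis=0)  # agents pool their estimates

eps = 0.05                 # shrinkage margin so estimation error stays safe
b_shrunk = b - eps

def project(x, A, bs, iters=50):
    """Approximate projection onto {x : A x <= bs} by cyclic halfspace
    projections (exact here, since there is a single constraint)."""
    for _ in range(iters):
        for a, bi in zip(A, bs):
            v = a @ x - bi
            if v > 0:
                x = x - v * a / (a @ a)
    return x

# --- Optimization phase (D-Safe-OGD sketch): each agent i holds a local
# time-varying quadratic loss f_{i,t}(x) = ||x - c_{i,t}||^2 / 2 whose
# unconstrained minimizer drifts outside the safe set.
eta = 0.05
x = np.zeros((n_agents, d))
for t in range(T_opt):
    centers = 0.6 + 0.1 * np.sin(t / 20.0) + 0.05 * rng.standard_normal((n_agents, d))
    x = W @ x                              # consensus step with neighbors
    grads = x - centers                    # local gradient of f_{i,t}
    x = np.array([project(x[i] - eta * grads[i], A_hat, b_shrunk)
                  for i in range(n_agents)])

# The shrinkage margin keeps actions feasible for the TRUE constraint
# even though agents only know the estimate A_hat.
safe = np.all(x @ A_true.T <= b + 1e-9)
```

The shrinkage margin `eps` plays the role the paper assigns to the estimated feasible sets: as long as the exploration-phase estimation error is smaller than the margin, every action taken in the optimization phase remains safe with respect to the true, unknown constraint.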
