
Revisiting Multi-Agent Asynchronous Online Optimization with Delays: the Strongly Convex Case

Abstract

We revisit multi-agent asynchronous online optimization with delays, where only one agent becomes active to make the decision at each round, and the corresponding feedback is received by all agents after unknown delays. Although previous studies have established an $O(\sqrt{dT})$ regret bound for this problem, they assume that the maximum delay $d$ is known or that the arrival order of feedback satisfies a special property, which may not hold in practice. In this paper, we find, surprisingly, that when the loss functions are strongly convex, these assumptions can be eliminated, and the existing regret bound can simultaneously be improved to $O(d\log T)$. Specifically, to exploit the strong convexity of the functions, we first propose a delayed variant of the classical follow-the-leader algorithm, namely FTDL, which is very simple but requires the full information of the functions as feedback. Moreover, to handle the more general case with only gradient feedback, we develop an approximate variant of FTDL by combining it with surrogate loss functions. Experimental results show that the approximate FTDL outperforms the existing algorithm in the strongly convex case.
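The core idea of FTDL described above is to play, at each round, the minimizer of the sum of all loss functions whose feedback has already arrived. The following minimal sketch (not taken from the paper) illustrates this on hypothetical quadratic strongly convex losses $f_t(x) = \frac{\lambda}{2}\|x - \theta_t\|^2$; the loss parameters, delay sequence, and function names are illustrative assumptions.

import numpy as np

def ftdl_sketch(thetas, delays, lam=1.0):
    # Minimal sketch of follow-the-delayed-leader on hypothetical quadratic
    # strongly convex losses f_t(x) = (lam / 2) * ||x - thetas[t]||^2.
    # thetas: (T, d) array of loss centers; delays[t]: number of rounds until
    # the feedback of round t arrives (unknown to the learner in advance).
    T, d = thetas.shape
    x = np.zeros(d)
    received = set()            # rounds whose feedback has arrived
    decisions, total_loss = [], 0.0

    for t in range(T):
        # Play the "leader": minimizer of the sum of received losses.
        # For these quadratics this is simply the mean of their centers.
        if received:
            x = thetas[sorted(received)].mean(axis=0)
        decisions.append(x.copy())
        total_loss += 0.5 * lam * np.linalg.norm(x - thetas[t]) ** 2

        # Feedback of round s becomes available once s + delays[s] <= t.
        for s in range(t + 1):
            if s + delays[s] <= t:
                received.add(s)

    # Regret against the best fixed decision in hindsight (the overall mean).
    x_star = thetas.mean(axis=0)
    best = 0.5 * lam * np.sum(np.linalg.norm(thetas - x_star, axis=1) ** 2)
    return np.array(decisions), total_loss - best

For example, ftdl_sketch(rng.normal(size=(200, 3)), rng.integers(0, 10, size=200)) with rng = np.random.default_rng(0) runs the sketch under random delays; the approximate variant in the paper replaces the exact losses with surrogate losses built from gradient feedback.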

@article{bao2025_2503.10013,
  title={Revisiting Multi-Agent Asynchronous Online Optimization with Delays: the Strongly Convex Case},
  author={Lingchan Bao and Tong Wei and Yuanyu Wan},
  journal={arXiv preprint arXiv:2503.10013},
  year={2025}
}