Efficient Near-Optimal Algorithm for Online Shortest Paths in Directed Acyclic Graphs with Bandit Feedback Against Adaptive Adversaries

In this paper, we study the online shortest path problem in directed acyclic graphs (DAGs) under bandit feedback against an adaptive adversary. Given a DAG G = (V, E) with a source node v_s and a sink node v_t, let X ⊆ {0,1}^{|E|} denote the set of all paths from v_s to v_t, each path identified with its edge-indicator vector. At each round t, we select a path x_t ∈ X and receive bandit feedback on our loss ⟨x_t, y_t⟩, where y_t is an adversarially chosen loss vector. Our goal is to minimize regret with respect to the best path in hindsight over T rounds. We propose the first computationally efficient algorithm to achieve a near-minimax optimal regret bound of Õ(√(|E| T log|X|)) with high probability against any adaptive adversary, where Õ(·) hides logarithmic factors in the number of edges |E|. Our algorithm leverages a novel loss estimator and a centroid-based decomposition in a nontrivial manner to attain this regret bound. As an application, we show that our algorithm for DAGs yields state-of-the-art efficient algorithms for m-sets, extensive-form games, the Colonel Blotto game, shortest walks in directed graphs, hypercubes, and multi-task multi-armed bandits, achieving improved high-probability regret guarantees in all these settings.
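To make the interaction model concrete, the following is a minimal Python sketch of the bandit protocol described above. The graph, adversary, and horizon are illustrative, and the learner shown is a uniform-random placeholder, not the paper's algorithm (which relies on a novel loss estimator and a centroid-based decomposition); only the protocol itself matches the setup: commit to a path, observe a single scalar path loss, and measure regret against the best path in hindsight.

```python
# Sketch of the online shortest-path bandit protocol on a small DAG.
# The learner picks paths uniformly at random as a placeholder; the graph
# and the random adversary are illustrative assumptions, not from the paper.
import random

random.seed(0)

# A small DAG: edges indexed 0..4, source "s", sink "t".
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t"), ("a", "b")]
adj = {}
for i, (u, v) in enumerate(edges):
    adj.setdefault(u, []).append((i, v))

def all_paths(node, sink):
    """Enumerate source-to-sink paths as tuples of edge indices."""
    if node == sink:
        return [()]
    paths = []
    for i, nxt in adj.get(node, []):
        for rest in all_paths(nxt, sink):
            paths.append((i,) + rest)
    return paths

paths = all_paths("s", "t")  # the decision set X

T = 1000
cum_loss = [0.0] * len(paths)  # full-information bookkeeping, for regret only
learner_loss = 0.0
for t in range(T):
    loss_vec = [random.random() for _ in edges]  # stands in for the adversary
    k = random.randrange(len(paths))             # placeholder learner
    # Bandit feedback: only the scalar loss of the chosen path is observed.
    learner_loss += sum(loss_vec[e] for e in paths[k])
    for j, p in enumerate(paths):
        cum_loss[j] += sum(loss_vec[e] for e in p)

regret = learner_loss - min(cum_loss)  # regret vs. best path in hindsight
print(len(paths), round(regret, 2))
```

A uniform learner incurs regret growing linearly in T; the point of the paper's algorithm is to drive this down to Õ(√(|E| T log|X|)) even against an adaptive adversary, while remaining computationally efficient.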
@article{maiti2025_2504.00461,
  title={Efficient Near-Optimal Algorithm for Online Shortest Paths in Directed Acyclic Graphs with Bandit Feedback Against Adaptive Adversaries},
  author={Arnab Maiti and Zhiyuan Fan and Kevin Jamieson and Lillian J. Ratliff and Gabriele Farina},
  journal={arXiv preprint arXiv:2504.00461},
  year={2025}
}