
Multi-agent Markov Entanglement

Main: 26 pages, Bibliography: 3 pages, Appendix: 19 pages
Abstract

Value decomposition has long been a fundamental technique in multi-agent dynamic programming and reinforcement learning (RL). Specifically, the value function of a global state $(s_1,s_2,\ldots,s_N)$ is often approximated as the sum of local functions: $V(s_1,s_2,\ldots,s_N)\approx\sum_{i=1}^N V_i(s_i)$. This approach traces back to the index policy in restless multi-armed bandit problems and has found various applications in modern RL systems. However, the theoretical justification for why this decomposition works so effectively remains underexplored.

In this paper, we uncover the underlying mathematical structure that enables value decomposition. We demonstrate that a multi-agent Markov decision process (MDP) permits value decomposition if and only if its transition matrix is not "entangled" -- a concept analogous to quantum entanglement in quantum physics. Drawing inspiration from how physicists measure quantum entanglement, we introduce a measure of "Markov entanglement" for multi-agent MDPs and show that it can be used to bound the decomposition error in general multi-agent MDPs.

Using the concept of Markov entanglement, we prove that a widely used class of index policies is weakly entangled and enjoys a sublinear $\mathcal{O}(\sqrt{N})$ decomposition error for $N$-agent systems. Finally, we show how Markov entanglement can be efficiently estimated in practice, providing practitioners with an empirical proxy for the quality of value decomposition.
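As a minimal illustration of the additive decomposition $V(s_1,\ldots,s_N)\approx\sum_i V_i(s_i)$, the NumPy sketch below builds a two-agent chain whose agents evolve independently (a simple stand-in for a "non-entangled" transition kernel), computes the exact global value function, and measures how well a sum of local value functions reproduces it. The setup, variable names, and least-squares fit are illustrative assumptions, not the authors' code or the paper's estimator.

```python
import numpy as np

# Hypothetical two-agent example: each agent i has its own transition
# matrix P_i and reward r_i; the joint chain is the product chain.
rng = np.random.default_rng(0)
n = 3        # local states per agent
gamma = 0.9  # discount factor

def random_stochastic(n, rng):
    """Random row-stochastic matrix."""
    P = rng.random((n, n))
    return P / P.sum(axis=1, keepdims=True)

P1, P2 = random_stochastic(n, rng), random_stochastic(n, rng)
r1, r2 = rng.random(n), rng.random(n)

# Joint transition kernel and reward on the product state space
# (joint index = s1 * n + s2).
P = np.kron(P1, P2)               # agents transition independently
r = np.add.outer(r1, r2).ravel()  # additive rewards

# Exact global value function: V = (I - gamma * P)^{-1} r.
V = np.linalg.solve(np.eye(n * n) - gamma * P, r)

# Best additive approximation V(s1, s2) ~ V1(s1) + V2(s2) via least squares.
S1, S2 = np.divmod(np.arange(n * n), n)  # decode joint index -> (s1, s2)
A = np.zeros((n * n, 2 * n))
A[np.arange(n * n), S1] = 1.0            # indicator of agent 1's local state
A[np.arange(n * n), n + S2] = 1.0        # indicator of agent 2's local state
coef, *_ = np.linalg.lstsq(A, V, rcond=None)

decomposition_error = np.max(np.abs(A @ coef - V))
print(f"max decomposition error: {decomposition_error:.2e}")
```

For this independent (product-chain) construction the reported error is numerically zero, consistent with exact decomposability in the non-entangled case; coupling the agents' transitions would generally make the error strictly positive.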

@article{chen2025_2506.02385,
  title={Multi-agent Markov Entanglement},
  author={Shuze Chen and Tianyi Peng},
  journal={arXiv preprint arXiv:2506.02385},
  year={2025}
}