
AIR: Unifying Individual and Collective Exploration in Cooperative Multi-Agent Reinforcement Learning

AAAI Conference on Artificial Intelligence (AAAI), 2024
Main: 7 pages · Appendix: 4 pages · Bibliography: 2 pages · 8 figures · 2 tables
Abstract

Exploration in cooperative multi-agent reinforcement learning (MARL) remains challenging for value-based agents due to the absence of an explicit policy. Existing approaches include individual exploration driven by each agent's uncertainty about the system and collective exploration driven by behavioral diversity among agents. However, the additional structures these methods introduce often reduce training efficiency and make it infeasible to combine the two. In this paper, we propose Adaptive exploration via Identity Recognition (AIR), which consists of two adversarial components: a classifier that recognizes agent identities from their trajectories, and an action selector that adaptively adjusts the mode and degree of exploration. We theoretically prove that AIR facilitates both individual and collective exploration during training, and experiments further demonstrate the efficiency and effectiveness of AIR across various tasks.
