
Nucleolus Credit Assignment for Effective Coalitions in Multi-agent Reinforcement Learning

Abstract

In cooperative multi-agent reinforcement learning (MARL), agents typically form a single grand coalition based on credit assignment to tackle a composite task, often resulting in suboptimal performance. This paper proposes a nucleolus-based credit assignment grounded in cooperative game theory, enabling the autonomous partitioning of agents into multiple small coalitions that can effectively identify and complete subtasks within a larger composite task. Specifically, our nucleolus Q-learning assigns fair credits to each agent, and the nucleolus Q-operator provides interpretable theoretical guarantees for both learning convergence and the stability of the formed small coalitions. In experiments on Predator-Prey and StarCraft scenarios across varying difficulty levels, our approach demonstrates the emergence of multiple effective coalitions during MARL training, leading to faster learning and superior win rates and cumulative rewards compared to four baseline methods, especially in hard and super-hard environments. These results show that nucleolus-based credit assignment is promising for complex composite tasks that require effective subteams of agents.
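The abstract does not detail the algorithm, but the game-theoretic idea behind the nucleolus is concrete: given a characteristic function v over coalitions, find a payoff (credit) vector that makes the worst-off coalition as satisfied as possible, i.e., minimizes the largest excess e(S, x) = v(S) - sum_{i in S} x_i. The sketch below is an illustrative toy, not the authors' implementation: the function name `least_core_allocation` and the example game are hypothetical, and a single LP only yields the least core; the full nucleolus lexicographically minimizes successive excesses with further LPs.

```python
# Minimal sketch (assumptions, not the paper's method): compute a
# least-core allocation for a small cooperative game via one LP.
# The nucleolus refines this by iteratively fixing the tightest
# coalitions and re-minimizing the next-largest excess.
from itertools import combinations
from scipy.optimize import linprog

def least_core_allocation(v, n):
    """v: dict mapping frozenset of agent ids to coalition value v(S)."""
    grand = frozenset(range(n))
    coalitions = [frozenset(c) for r in range(1, n)
                  for c in combinations(range(n), r)]
    # Variables: x_1..x_n (per-agent credits) plus epsilon (excess bound).
    # Minimize epsilon s.t. v(S) - sum_{i in S} x_i <= epsilon for every
    # proper coalition S, with efficiency sum_i x_i = v(N).
    c = [0.0] * n + [1.0]
    A_ub, b_ub = [], []
    for S in coalitions:
        # Rewrite v(S) - sum_{i in S} x_i <= eps  as
        # -sum_{i in S} x_i - eps <= -v(S).
        A_ub.append([-1.0 if i in S else 0.0 for i in range(n)] + [-1.0])
        b_ub.append(-v.get(S, 0.0))
    A_eq = [[1.0] * n + [0.0]]
    b_eq = [v[grand]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * (n + 1))
    return res.x[:n], res.x[n]

# Toy symmetric 3-agent game: any pair earns 4, the grand coalition 6.
v = {frozenset({0}): 0.0, frozenset({1}): 0.0, frozenset({2}): 0.0,
     frozenset({0, 1}): 4.0, frozenset({0, 2}): 4.0, frozenset({1, 2}): 4.0,
     frozenset({0, 1, 2}): 6.0}
credits, eps = least_core_allocation(v, 3)  # credits ~ [2, 2, 2], eps ~ 0
```

In the paper's setting, v(S) would come from learned coalition Q-values rather than a fixed table, and stable small coalitions correspond to allocations under which no subgroup has a large positive excess, i.e., no incentive to break away.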

@article{li2025_2503.00372,
  title={Nucleolus Credit Assignment for Effective Coalitions in Multi-agent Reinforcement Learning},
  author={Yugu Li and Zehong Cao and Jianglin Qiao and Siyi Hu},
  journal={arXiv preprint arXiv:2503.00372},
  year={2025}
}