Solving the Diffusion of Responsibility Problem in Multiagent Reinforcement Learning with a Policy Resonance Approach

IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022
Abstract

We report a previously unreported problem in multiagent reinforcement learning (MARL), which we name Diffusion of Responsibility (DR). DR causes failures in negotiating a reliable division of responsibilities to complete sophisticated cooperative tasks. It reflects a flaw in how existing algorithms handle the multiagent exploration-exploitation dilemma, in both value-based and policy-based MARL methods. The DR problem resembles a phenomenon of the same name in social psychology, also known as the bystander effect. In this work, we begin by theoretically analyzing the cause of the DR problem, and we emphasize that it is distinct from the reward shaping and credit assignment problems. To address the DR problem, we propose a Policy Resonance method that changes the multiagent exploration-exploitation strategy and improves the performance of MARL algorithms on difficult MARL tasks. This method can be adopted by most existing MARL algorithms to resolve the performance degradation caused by the DR problem. Experiments are performed on multiple benchmark tasks, including FME, a diagnostic multiagent environment, and ADCA, a competitive multiagent game. Finally, we implement the Policy Resonance method on state-of-the-art (SOTA) MARL algorithms to illustrate the effectiveness of this approach.
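To give a concrete feel for the DR mechanism described above, the sketch below works through a toy two-agent, one-step cooperative game. The game, its payoffs, and the epsilon-greedy setup are our own illustrative construction, not the paper's FME or ADCA environments or its Policy Resonance method: exactly one agent should act ("press", reward +1), both acting collide (-1), and neither acting yields 0. Under a partner's exploration noise, the responsible action's expected value collapses, so each agent is tempted to leave the responsibility to the other, mirroring the bystander effect.

```python
# Toy illustration (our own construction, not from the paper): how a
# partner's epsilon-greedy exploration can make the "responsible" action
# look worthless, a simplified analogue of Diffusion of Responsibility.

def press_prob(policy_press: bool, eps: float) -> float:
    """Probability the partner presses under epsilon-greedy exploration."""
    # With probability eps the partner acts uniformly over its 2 actions.
    greedy = 1.0 if policy_press else 0.0
    return (1 - eps) * greedy + eps * 0.5

def q_values(partner_press_prob: float) -> dict:
    """Expected return of each of our actions, given the partner's press prob."""
    q = partner_press_prob
    return {
        "press": (1 - q) * 1.0 + q * (-1.0),  # lone press: +1, collision: -1
        "idle":  q * 1.0 + (1 - q) * 0.0,     # partner scores alone, else 0
    }

# The partner greedily presses but still explores with eps = 0.2,
# so it actually presses with probability 0.9.
q = q_values(press_prob(True, 0.2))
# Pressing now looks bad (frequent collisions) while idling looks safe,
# pushing each agent to shirk the responsible action.
print(q)  # {'press': -0.8, 'idle': 0.9}
```

Under this noise level, pressing evaluates to -0.8 while idling evaluates to 0.9, so a best-responding agent abandons the responsible action even though coordinated lone pressing is optimal.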
