Fusion-PSRO: Nash Policy Fusion for Policy Space Response Oracles

Abstract

For solving zero-sum games involving non-transitivity, a common approach is to maintain a population of policies that approximates the Nash Equilibrium (NE). Previous work has shown that the Policy Space Response Oracles (PSRO) algorithm is an effective framework for solving such games. However, existing methods initialize each new Best Response (BR) policy either from scratch or from a single historical policy, missing the opportunity to leverage the full set of past policies to produce a better BR. In this paper, we propose Fusion-PSRO, which employs Nash Policy Fusion to initialize the new policy for BR training. Nash Policy Fusion serves as an implicit guiding policy that starts exploration from the current Meta-NE, providing a closer approximation to the BR. Moreover, it implicitly computes a weighted moving average of past policies, with the weights dynamically adjusted according to the Meta-NE at each iteration. This cumulative process further strengthens the policy population. Empirical results on classic benchmarks show that Fusion-PSRO achieves lower exploitability, thereby mitigating the shortcomings of prior policy-initialization schemes for BR.
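
As a concrete illustration of the idea described above, the sketch below shows one plausible reading of Nash Policy Fusion: averaging the parameters of the population's policy networks, weighted by the current Meta-NE mixture, to produce an initialization for BR training. The abstract does not specify whether fusion happens in parameter space or in action-distribution space, so the function name nash_policy_fusion and its signature are illustrative assumptions, not the authors' implementation.

import torch

def nash_policy_fusion(policies, meta_ne_weights):
    """Fuse a policy population into one initialization for BR training.

    policies: list of torch.nn.Module instances with identical architectures
        (the PSRO policy population).
    meta_ne_weights: Meta-NE mixture probabilities over the population
        (non-negative, summing to 1).
    Returns a state_dict for initializing the new BR policy.
    """
    # Start from a copy of the first policy's state so that non-float
    # buffers (e.g., BatchNorm step counters) are carried over unchanged.
    fused = {k: v.clone() for k, v in policies[0].state_dict().items()}
    for k, v in fused.items():
        if v.is_floating_point():
            # Weighted average of this parameter across the population,
            # with weights given by the Meta-NE mixture.
            fused[k] = sum(w * p.state_dict()[k]
                           for p, w in zip(policies, meta_ne_weights))
    return fused

# Hypothetical usage at the start of a PSRO iteration:
#   new_policy.load_state_dict(nash_policy_fusion(population, meta_ne))

Under this reading, the "weighted moving average" interpretation follows naturally: because the Meta-NE weights are recomputed each iteration, repeated fusion accumulates past policies with iteration-dependent weights rather than discarding them.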

@article{lian2025_2405.21027,
  title={Fusion-PSRO: Nash Policy Fusion for Policy Space Response Oracles},
  author={Jiesong Lian and Yucong Huang and Chengdong Ma and Mingzhi Wang and Ying Wen and Long Hu and Yixue Hao},
  journal={arXiv preprint arXiv:2405.21027},
  year={2025}
}