Meta-Inverse Reinforcement Learning for Mean Field Games via Probabilistic Context Variables

AAAI Conference on Artificial Intelligence (AAAI), 2024
4 September 2025
Yang Chen, Xiao Lin, Bo Yan, Libo Zhang, Jiamou Liu, N. Tan, Michael Witbrock
Main: 7 pages · Appendix: 5 pages · Bibliography: 2 pages · 6 figures · 3 tables
Abstract

Designing suitable reward functions for numerous interacting intelligent agents is challenging in real-world applications. Inverse reinforcement learning (IRL) in mean field games (MFGs) offers a practical framework for inferring reward functions from expert demonstrations. While promising, existing methods assume agent homogeneity, which limits their ability to handle demonstrations with heterogeneous and unknown objectives, as are common in practice. To address this, we propose a deep latent variable MFG model and an associated IRL method. Critically, our method can infer rewards from different yet structurally similar tasks without prior knowledge of the underlying contexts and without modifying the MFG model itself. Experiments on simulated scenarios and a real-world spatial taxi-ride pricing problem demonstrate that our approach outperforms state-of-the-art IRL methods for MFGs.
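The abstract names two coupled ingredients: a probabilistic context variable inferred from demonstrations, and a reward function conditioned on that context so that one shared model covers heterogeneous tasks. The PyTorch sketch below illustrates this general idea only; it is not the authors' implementation, and the class names, network sizes, Gaussian posterior, and mean pooling over transitions are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): a probabilistic context
# encoder q(z | demonstration) paired with a context-conditioned reward
# network r(s, a, z), mirroring the two components the abstract describes.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Infers a Gaussian latent context z from one demonstration's (s, a) pairs."""
    def __init__(self, obs_dim: int, act_dim: int, z_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),  # outputs mean and log-variance
        )
        self.z_dim = z_dim

    def forward(self, states: torch.Tensor, actions: torch.Tensor):
        # Permutation-invariant mean pooling over the demonstration's steps.
        h = self.net(torch.cat([states, actions], dim=-1)).mean(dim=0)
        mu, log_var = h[: self.z_dim], h[self.z_dim :]
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()
        return z, mu, log_var

class ContextReward(nn.Module):
    """Reward r(s, a, z) shared across tasks, specialized by the context z."""
    def __init__(self, obs_dim: int, act_dim: int, z_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states, actions, z):
        z = z.expand(states.shape[0], -1)  # broadcast z to every timestep
        return self.net(torch.cat([states, actions, z], dim=-1)).squeeze(-1)

# Usage: encode one expert demonstration, then score its transitions.
obs_dim, act_dim, z_dim, T = 4, 2, 8, 50
encoder = ContextEncoder(obs_dim, act_dim, z_dim)
reward = ContextReward(obs_dim, act_dim, z_dim)
demo_s, demo_a = torch.randn(T, obs_dim), torch.randn(T, act_dim)
z, mu, log_var = encoder(demo_s, demo_a)
r = reward(demo_s, demo_a, z)  # per-step rewards under the inferred context
```

In a full meta-IRL pipeline, the inferred context would also condition the policy or mean-field equilibrium computation, and the encoder would be trained jointly with the reward (e.g., under a variational objective); the sketch stops at the representational core.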
