ResearchTrend.AI

Explaining Strategic Decisions in Multi-Agent Reinforcement Learning for Aerial Combat Tactics

16 May 2025
Ardian Selmonaj
Alessandro Antonucci
Adrian Schneider
Michael Rüegsegger
Matthias Sommer
Abstract

Artificial intelligence (AI) is reshaping strategic planning, with Multi-Agent Reinforcement Learning (MARL) enabling coordination among autonomous agents in complex scenarios. However, its practical deployment in sensitive military contexts is constrained by a lack of explainability, an essential factor for trust, safety, and alignment with human strategies. This work reviews and assesses current advances in explainability methods for MARL, with a focus on simulated air combat. We adapt various explainability techniques to different aerial combat scenarios to gain insights into model behavior. By linking AI-generated tactics with human-understandable reasoning, we emphasize the need for transparency to ensure reliable deployment and meaningful human-machine interaction. By illuminating the crucial importance of explainability in advancing MARL for operational defense, our work supports not only strategic planning but also the training of military personnel with insightful and comprehensible analyses.

@article{selmonaj2025_2505.11311,
  title={Explaining Strategic Decisions in Multi-Agent Reinforcement Learning for Aerial Combat Tactics},
  author={Ardian Selmonaj and Alessandro Antonucci and Adrian Schneider and Michael Rüegsegger and Matthias Sommer},
  journal={arXiv preprint arXiv:2505.11311},
  year={2025}
}