This paper explores the emergence of language in multi-agent reinforcement learning (MARL) using transformers. Existing methods such as RIAL, DIAL, and CommNet enable inter-agent communication but lack interpretability. We propose Differentiable Inter-Agent Transformers (DIAT), which leverage self-attention to learn symbolic, human-understandable communication protocols. In experiments, DIAT learns to encode observations into an interpretable vocabulary and meaningful embeddings, effectively solving cooperative tasks. These results highlight the potential of DIAT for interpretable communication in complex multi-agent environments.
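The core idea described above can be illustrated with a minimal sketch: each agent embeds its observation, agents exchange information through a self-attention step, and the attended states are mapped onto a small discrete vocabulary via a Gumbel-softmax relaxation (a standard trick for keeping discrete messages differentiable, as in DIAL-style training). All names, dimensions, and weight matrices here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (n_agents, d) observation embeddings; scaled dot-product attention
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

def discretize_messages(H, Wc, tau=0.5):
    # map attended agent states to distributions over a small symbol vocabulary;
    # Gumbel noise + low temperature tau approximates sampling one-hot symbols
    # while remaining differentiable during training (Gumbel-softmax assumption)
    logits = H @ Wc
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    return softmax((logits + g) / tau, axis=-1)

# illustrative sizes: 3 agents, embedding dim 8, vocabulary of 5 symbols
n_agents, d, vocab = 3, 8, 5
X = rng.normal(size=(n_agents, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Wc = rng.normal(size=(d, vocab))

H = self_attention(X, Wq, Wk, Wv)
M = discretize_messages(H, Wc)
print(M.shape)  # one soft symbol distribution per agent: (3, 5)
```

Because each row of `M` is a near-one-hot distribution over a fixed vocabulary, the emergent messages can be read off as discrete symbols and inspected by a human, which is the interpretability property the abstract emphasizes.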
@article{bhardwaj2025_2505.02215,
  title   = {Interpretable Emergent Language Using Inter-Agent Transformers},
  author  = {Mannan Bhardwaj},
  journal = {arXiv preprint arXiv:2505.02215},
  year    = {2025}
}