
Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey

Abstract

This survey examines evaluation methods for large language model (LLM)-based agents in multi-turn conversational settings. Using a PRISMA-inspired framework, we systematically reviewed nearly 250 scholarly sources from a wide range of publication venues, capturing the state of the art and establishing a solid foundation for our analysis. Our study offers a structured approach by developing two interrelated taxonomies: one that defines \emph{what to evaluate} and another that explains \emph{how to evaluate}. The first taxonomy identifies the key components of LLM-based agents for multi-turn conversations and their evaluation dimensions, including task completion, response quality, user experience, memory and context retention, and planning and tool integration. Together, these dimensions ensure that conversational agents are assessed holistically and meaningfully. The second taxonomy focuses on evaluation methodologies, categorizing approaches into annotation-based evaluations, automated metrics, hybrid strategies that combine human assessments with quantitative measures, and self-judging methods that use LLMs as evaluators. This framework captures not only traditional metrics derived from language understanding, such as BLEU and ROUGE scores, but also advanced techniques that reflect the dynamic, interactive nature of multi-turn dialogues.
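As a concrete illustration of the reference-based automated metrics mentioned above, the sketch below scores each agent turn of a dialogue against a gold reference with corpus BLEU and turn-level ROUGE-L. It is a minimal example, not the survey's own tooling: it assumes the `sacrebleu` and `rouge-score` packages, and the dialogue data and helper names are illustrative.

```python
# Minimal sketch of per-turn, reference-based scoring for a multi-turn dialogue.
# Assumes: pip install sacrebleu rouge-score
# The dialogue data and function names are illustrative, not from the survey.
import sacrebleu
from rouge_score import rouge_scorer


def score_dialogue(agent_turns, reference_turns):
    """Score agent turns against gold references with corpus BLEU and ROUGE-L F1."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

    # Corpus-level BLEU over all turns (sacrebleu expects a list of hypotheses
    # and a list of reference streams).
    bleu = sacrebleu.corpus_bleu(agent_turns, [reference_turns]).score

    # Turn-level ROUGE-L F1, averaged over the conversation.
    rouge_l = [
        scorer.score(ref, hyp)["rougeL"].fmeasure
        for hyp, ref in zip(agent_turns, reference_turns)
    ]
    return {"bleu": bleu, "rougeL_f1": sum(rouge_l) / len(rouge_l)}


if __name__ == "__main__":
    # Toy two-turn conversation: agent responses vs. gold references.
    agent = [
        "Your order ships tomorrow and arrives within three days.",
        "I have updated the delivery address to 42 Elm Street.",
    ]
    gold = [
        "The order ships tomorrow and should arrive in three days.",
        "The delivery address has been updated to 42 Elm Street.",
    ]
    print(score_dialogue(agent, gold))
```

Note that such n-gram overlap metrics judge each turn in isolation and say nothing about task completion, context retention, or user experience, which is precisely the gap the hybrid and LLM-as-judge methods in the second taxonomy aim to fill.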

@article{guan2025_2503.22458,
  title={Evaluating LLM-based Agents for Multi-Turn Conversations: A Survey},
  author={Shengyue Guan and Haoyi Xiong and Jindong Wang and Jiang Bian and Bin Zhu and Jian-guang Lou},
  journal={arXiv preprint arXiv:2503.22458},
  year={2025}
}