ResearchTrend.AI
Firm or Fickle? Evaluating Large Language Models Consistency in Sequential Interactions

28 March 2025
Yubo Li
Yidi Miao
Xueying Ding
Ramayya Krishnan
Rema Padman
Abstract

Large Language Models (LLMs) have shown remarkable capabilities across various tasks, but their deployment in high-stakes domains requires consistent performance across multiple interaction rounds. This paper introduces a comprehensive framework for evaluating and improving LLM response consistency, making three key contributions. First, we propose a novel Position-Weighted Consistency (PWC) score that captures both the importance of early-stage stability and recovery patterns in multi-turn interactions. Second, we present a carefully curated benchmark dataset spanning diverse domains and difficulty levels, specifically designed to evaluate LLM consistency under various challenging follow-up scenarios. Third, we introduce Confidence-Aware Response Generation (CARG), a framework that improves response stability by incorporating model confidence signals into the generation process. Empirical results demonstrate that CARG significantly improves response stability without sacrificing accuracy, underscoring its potential for reliable LLM deployment in critical applications.
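The abstract does not give the PWC formula, but the idea of weighting early-turn stability more heavily can be illustrated with a minimal sketch. Everything below is a hypothetical construction, not the paper's actual metric: it assumes per-turn binary consistency flags and exponentially decaying positional weights.

```python
# Hypothetical sketch of a position-weighted consistency score.
# Assumptions (not from the paper): each follow-up turn yields a 0/1 flag
# (1 = the model's answer stayed consistent with its initial answer), and
# earlier turns receive larger, exponentially decaying weights.

def position_weighted_consistency(consistency_flags, decay=0.8):
    """Return a weighted average of per-turn consistency flags.

    consistency_flags: sequence of 0/1 values, one per follow-up turn.
    decay: factor < 1 so that early turns contribute more to the score.
    """
    if not consistency_flags:
        raise ValueError("need at least one turn")
    weights = [decay ** t for t in range(len(consistency_flags))]
    weighted = sum(w * c for w, c in zip(weights, consistency_flags))
    return weighted / sum(weights)
```

Under this sketch, a model that flips on the first follow-up is penalized more than one that flips on the last: `position_weighted_consistency([0, 1, 1])` is lower than `position_weighted_consistency([1, 1, 0])` for any decay below 1.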

@article{li2025_2503.22353,
  title={Firm or Fickle? Evaluating Large Language Models Consistency in Sequential Interactions},
  author={Yubo Li and Yidi Miao and Xueying Ding and Ramayya Krishnan and Rema Padman},
  journal={arXiv preprint arXiv:2503.22353},
  year={2025}
}