Med-CoDE: Medical Critique based Disagreement Evaluation Framework

21 April 2025
Mohit Gupta
Akiko Aizawa
Rajiv Ratn Shah
Abstract

The emergence of large language models (LLMs) has significantly influenced numerous fields, including healthcare, by enhancing the ability of automated systems to process and generate human-like text. Despite these advances, however, the reliability and accuracy of LLMs in medical contexts remain critical concerns. Current evaluation methods often lack robustness and fail to provide a comprehensive assessment of LLM performance, posing potential risks in clinical settings. In this work, we propose Med-CoDE, an evaluation framework specifically designed for medical LLMs to address these challenges. The framework leverages a critique-based approach to quantitatively measure the degree of disagreement between model-generated responses and established medical ground truths, capturing both accuracy and reliability in medical settings. It aims to fill the existing gap in LLM assessment by offering a systematic method to evaluate the quality and trustworthiness of medical LLMs. Through extensive experiments and case studies, we demonstrate the practicality of our framework in providing a comprehensive and reliable evaluation of medical LLMs.
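The abstract does not specify Med-CoDE's actual scoring rule, but the core idea of quantifying disagreement between a model response and a medical ground truth can be illustrated with a minimal, hypothetical sketch: assume a critique model labels each generated claim with a disagreement level, and the labels are averaged into a single score in [0, 1]. All names below are illustrative assumptions, not the paper's API.

```python
from enum import Enum


class Disagreement(Enum):
    """Hypothetical per-claim critique labels (not from the paper)."""
    AGREE = 0.0       # claim matches the medical ground truth
    PARTIAL = 0.5     # claim is partially supported
    CONTRADICT = 1.0  # claim contradicts the ground truth


def disagreement_score(labels):
    """Aggregate per-claim critique labels into one score.

    0.0 means full agreement with the medical ground truth;
    1.0 means every claim contradicts it.
    """
    if not labels:
        raise ValueError("no critique labels provided")
    return sum(label.value for label in labels) / len(labels)


# Example: two agreeing claims, one partial, one contradiction.
labels = [Disagreement.AGREE, Disagreement.AGREE,
          Disagreement.PARTIAL, Disagreement.CONTRADICT]
print(disagreement_score(labels))  # 0.375
```

In practice the critique labels would come from an LLM critic comparing each response against the reference answer; the sketch only shows how such labels could be reduced to a single quantitative disagreement measure.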

@article{gupta2025_2504.15330,
  title={Med-CoDE: Medical Critique based Disagreement Evaluation Framework},
  author={Mohit Gupta and Akiko Aizawa and Rajiv Ratn Shah},
  journal={arXiv preprint arXiv:2504.15330},
  year={2025}
}