arXiv:2406.06007

CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models

10 June 2024
Peng Xia
Ze Chen
Juanxi Tian
Yangrui Gong
Ruibo Hou
Yue Xu
Zhenbang Wu
Zhiyuan Fan
Yiyang Zhou
Kangyu Zhu
Wenhao Zheng
Zhaoyang Wang
Xiao Wang
Xuchao Zhang
Chetan Bansal
Marc Niethammer
Junzhou Huang
Hongtu Zhu
Yun-Qing Li
Jimeng Sun
Zongyuan Ge
Gang Li
James Zou
Huaxiu Yao
Abstract

Artificial intelligence has significantly impacted medical applications, particularly with the advent of Medical Large Vision Language Models (Med-LVLMs), sparking optimism for automated and personalized healthcare. However, the trustworthiness of Med-LVLMs remains unverified, posing significant risks to future deployment. In this paper, we introduce CARES, a benchmark that comprehensively evaluates the trustworthiness of Med-LVLMs across the medical domain. We assess trustworthiness along five dimensions: trustfulness, fairness, safety, privacy, and robustness. CARES comprises about 41K question-answer pairs in both closed-ended and open-ended formats, covering 16 medical image modalities and 27 anatomical regions. Our analysis reveals that current models consistently exhibit trustworthiness concerns, often producing factual inaccuracies and failing to maintain fairness across demographic groups; they are also vulnerable to attacks and show a lack of privacy awareness. We publicly release our benchmark and code at https://github.com/richard-peng-xia/CARES.
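As a rough illustration of how a benchmark of this shape (image, question, gold answer, metadata) might be consumed, here is a minimal Python sketch. Everything in it is an assumption rather than the authors' evaluation code: the file name cares_sample.json, the fields image_path, question, answer, and format, and the exact-match scorer are all hypothetical; the actual schema and metrics live in the CARES repository linked above.

```python
import json


def load_qa_pairs(path):
    """Load benchmark entries from a JSON file.

    Each entry is assumed to hold an image path, a question, a gold
    answer, and a format tag -- a guess at the schema, not the actual
    CARES layout.
    """
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def score_closed_ended(prediction, gold):
    """Toy scorer for closed-ended (e.g. yes/no) questions:
    exact match after whitespace and case normalization."""
    return prediction.strip().lower() == gold.strip().lower()


def evaluate(model_fn, qa_pairs):
    """Run a model callable over the closed-ended subset and report accuracy."""
    closed = [qa for qa in qa_pairs if qa.get("format") == "closed"]
    correct = sum(
        score_closed_ended(model_fn(qa["image_path"], qa["question"]), qa["answer"])
        for qa in closed
    )
    return correct / len(closed) if closed else 0.0


if __name__ == "__main__":
    pairs = load_qa_pairs("cares_sample.json")  # hypothetical filename
    # `my_med_lvlm` stands in for any Med-LVLM inference callable
    # that maps (image_path, question) -> answer string.
    # print(evaluate(my_med_lvlm, pairs))
```

Open-ended answers would need a softer metric than exact match (e.g. model-based or token-overlap scoring), which is why the sketch restricts itself to the closed-ended subset.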
