
Retrieval-augmented systems can be dangerous medical communicators

Abstract

Patients have long sought health information online, and increasingly, they are turning to generative AI to answer their health-related queries. Given the high stakes of the medical domain, techniques such as retrieval-augmented generation and citation grounding have been widely promoted as ways to reduce hallucinations and improve the accuracy of AI-generated responses, and they have been adopted into search engines. This paper argues that even when these methods produce literally accurate content drawn from source documents, free of hallucinations, they can still be highly misleading. Patients may derive significantly different interpretations from AI-generated outputs than they would from reading the original source material, let alone from consulting a knowledgeable clinician. Through a large-scale query analysis on topics including disputed diagnoses and procedure safety, we support our argument with quantitative and qualitative evidence of the suboptimal answers produced by current systems. In particular, we highlight how these models tend to decontextualize facts, omit critical relevant sources, and reinforce patient misconceptions or biases. We propose a series of recommendations, such as incorporating communication pragmatics and enhancing comprehension of source documents, that could help mitigate these issues and that extend beyond the medical domain.
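
For readers unfamiliar with the pipeline under discussion, the Python sketch below illustrates the general shape of a retrieval-augmented QA system with citation grounding. It is a minimal, hypothetical illustration, not the authors' system: the keyword-overlap retriever, the answer_with_citations helper, and the toy corpus are all assumptions introduced for exposition, and the generation step is stubbed out where a language model would normally be called.

# Minimal sketch of retrieval-augmented generation (RAG) with citation grounding.
# Hypothetical illustration only: the retriever is naive keyword overlap and the
# generation step is a stub standing in for a call to a language model.

from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by keyword overlap with the query (stand-in for a real retriever)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_citations(query: str, corpus: list[Document]) -> str:
    """Compose an answer from retrieved passages, citing each source by id.

    The failure mode the paper highlights: each cited sentence can be
    literally accurate yet, stripped of surrounding context and of omitted
    sources, still mislead the reader.
    """
    retrieved = retrieve(query, corpus)
    snippets = [f"{d.text} [{d.doc_id}]" for d in retrieved]
    # A real system would pass the query and snippets to an LLM here.
    return " ".join(snippets)


if __name__ == "__main__":
    corpus = [
        Document("guideline-1", "Procedure X is generally safe for most adults."),
        Document("guideline-2", "Procedure X carries elevated risk for patients with condition Y."),
        Document("faq-3", "Recovery from procedure X typically takes two weeks."),
    ]
    print(answer_with_citations("Is procedure X safe?", corpus))

In this toy example the composed answer happens to include the cautionary source; the paper's concern is precisely the cases where retrieval ranks such context out, or the generation step drops or decontextualizes it.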

@article{wong2025_2502.14898,
  title={Retrieval-augmented systems can be dangerous medical communicators},
  author={Lionel Wong and Ayman Ali and Raymond Xiong and Shannon Zejiang Shen and Yoon Kim and Monica Agrawal},
  journal={arXiv preprint arXiv:2502.14898},
  year={2025}
}