CHBench: A Chinese Dataset for Evaluating Health in Large Language Models

Abstract

With the rapid development of large language models (LLMs), assessing their performance on health-related inquiries has become increasingly essential. Because these models are used in real-world contexts, where misinformation can lead to serious consequences for individuals seeking medical advice and support, a rigorous focus on safety and trustworthiness is necessary. In this work, we introduce CHBench, the first comprehensive safety-oriented Chinese health-related benchmark designed to evaluate LLMs' capabilities in understanding and addressing physical and mental health issues from a safety perspective across diverse scenarios. CHBench comprises 6,493 entries on mental health and 2,999 entries on physical health, spanning a wide range of topics. Our extensive evaluations of four popular Chinese LLMs highlight significant gaps in their capacity to deliver safe and accurate health information, underscoring the urgent need for further advancements in this critical domain. The code is available at this https URL.

@article{guo2025_2409.15766,
  title={CHBench: A Chinese Dataset for Evaluating Health in Large Language Models},
  author={Chenlu Guo and Nuo Xu and Yi Chang and Yuan Wu},
  journal={arXiv preprint arXiv:2409.15766},
  year={2025}
}