
A Women's Health Benchmark for Large Language Models

Victoria-Elisabeth Gruber
Razvan Marinescu
Diego Fajardo
Amin H. Nassar
Christopher Arkfeld
Alexandria Ludlow
Shama Patel
Mehrnoosh Samaei
Valerie Klug
Anna Huber
Marcel Gühner
Albert Botta i Orfila
Irene Lagoja
Kimya Tarr
Haleigh Larson
Mary Beth Howard
Main: 11 pages · 7 figures · 3 tables · Bibliography: 3 pages · Appendix: 1 page
Abstract

As large language models (LLMs) become primary sources of health information for millions, their accuracy in women's health remains critically unexamined. We introduce the Women's Health Benchmark (WHB), the first benchmark evaluating LLM performance specifically in women's health. Our benchmark comprises 96 rigorously validated model stumps covering five medical specialties (obstetrics and gynecology, emergency medicine, primary care, oncology, and neurology), three query types (patient query, clinician query, and evidence/policy query), and eight error types (dosage/medication errors, missing critical information, outdated guidelines/treatment recommendations, incorrect treatment advice, incorrect factual information, missing/incorrect differential diagnosis, missed urgency, and inappropriate recommendations). Evaluating 13 state-of-the-art LLMs reveals alarming gaps: current models show failure rates of approximately 60% on WHB, with performance varying dramatically across specialties and error types. Notably, models universally struggle with "missed urgency" indicators, while newer models such as GPT-5 show significant improvements in avoiding inappropriate recommendations. Our findings underscore that AI chatbots are not yet fully capable of providing reliable advice in women's health.
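To make the benchmark's taxonomy concrete, the sketch below shows one way the WHB items and per-category failure rates described above could be represented. All names here (WHBItem, failure_rates, the grading interface) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a WHB-style evaluation harness. The item fields
# mirror the taxonomy in the abstract (5 specialties, 3 query types,
# 8 error types); the names and grading interface are assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class WHBItem:
    stump: str        # validated model stump containing an embedded error
    specialty: str    # e.g. "obstetrics and gynecology"
    query_type: str   # "patient", "clinician", or "evidence/policy"
    error_type: str   # e.g. "missed urgency"

def failure_rates(items: list[WHBItem], passed: list[bool], key: str) -> dict[str, float]:
    """Fraction of failed items per category (e.g. key='error_type')."""
    totals, fails = defaultdict(int), defaultdict(int)
    for item, ok in zip(items, passed, strict=True):
        bucket = getattr(item, key)
        totals[bucket] += 1
        fails[bucket] += not ok
    return {k: fails[k] / totals[k] for k in totals}

# Usage: an overall failure rate near 0.60 would match the reported gap.
items = [
    WHBItem("...", "emergency medicine", "patient", "missed urgency"),
    WHBItem("...", "oncology", "clinician", "dosage/medication errors"),
]
print(failure_rates(items, passed=[False, True], key="error_type"))
```

Grouping by `specialty` or `query_type` instead of `error_type` would reproduce the other per-category breakdowns the abstract alludes to.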
