
HalluHard: A Hard Multi-Turn Hallucination Benchmark

Dongyang Fan
Sebastien Delsad
Nicolas Flammarion
Maksym Andriushchenko
Main: 8 pages · Appendix: 11 pages · Bibliography: 4 pages · 9 figures · 13 tables
Abstract

Large language models (LLMs) still produce plausible-sounding but ungrounded factual claims, a problem that worsens in multi-turn dialogue as context grows and early errors cascade. We introduce HalluHard, a challenging multi-turn hallucination benchmark with 950 seed questions spanning four high-stakes domains: legal cases, research questions, medical guidelines, and coding. We operationalize groundedness by requiring inline citations for factual assertions. To support reliable evaluation in open-ended settings, we propose a judging pipeline that iteratively retrieves evidence via web search. It can fetch, filter, and parse full-text sources (including PDFs) to assess whether cited material actually supports the generated content. Across a diverse set of frontier proprietary and open-weight models, hallucinations remain substantial even with web search (≈30% for the strongest configuration, Opus-4.5 with web search), with content-grounding errors persisting at high rates. Finally, we show that hallucination behavior is shaped by model capacity, turn position, effective reasoning, and the type of knowledge required.
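The judging pipeline described above (iteratively retrieve evidence via web search, fetch and parse sources, then check support) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `search`, `fetch_and_parse`, and `supports` functions are hypothetical stubs standing in for a web-search API, a document/PDF parser, and an LLM-based entailment judge.

```python
# Illustrative sketch of an iterative evidence-retrieval judging loop
# (hypothetical stubs; a real pipeline would call a search API and an LLM judge).

def search(query: str) -> list[str]:
    """Stub search engine: return candidate source URLs for a query."""
    return {"aspirin reduces fever": ["https://example.org/guideline.pdf"]}.get(query, [])

def fetch_and_parse(url: str) -> str:
    """Stub fetcher: return a source's full text (PDF parsing would happen here)."""
    return "Aspirin is an antipyretic: it reduces fever in adults."

def supports(claim: str, source_text: str) -> bool:
    """Stub entailment check; a real judge would query an LLM."""
    return "reduces fever" in source_text.lower()

def judge_claim(claim: str, max_rounds: int = 3) -> str:
    """Iteratively retrieve evidence and label the claim grounded or unsupported."""
    query = claim
    for _ in range(max_rounds):
        for url in search(query):
            if supports(claim, fetch_and_parse(url)):
                return "grounded"
        query = claim + " evidence"  # refine the query and retry
    return "unsupported"
```

A claim whose cited evidence can be located and verified is labeled grounded; claims for which no supporting source is found after the retrieval budget is exhausted count toward the hallucination rate.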
