
DEER: A Benchmark for Evaluating Deep Research Agents on Expert Report Generation

Janghoon Han
Heegyu Kim
Changho Lee
Dahm Lee
Min Hyung Park
Hosung Song
Stanley Jungkyu Choi
Moontae Lee
Honglak Lee
Main: 7 pages, Bibliography: 6 pages, Appendix: 26 pages; 10 figures, 16 tables
Abstract

Recent advances in large language models have enabled deep research systems that generate expert-level reports through multi-step reasoning and evidence-based synthesis. However, evaluating such reports remains challenging: report quality is multifaceted, making it difficult to determine what to assess and by what criteria; LLM-based judges may miss errors that require domain expertise to identify; and because deep research relies on retrieved evidence, report-wide claim verification is also necessary. To address these issues, we propose DEER, a benchmark for evaluating expert-level deep research reports. DEER systematizes evaluation criteria with an expert-developed taxonomy (7 dimensions, 25 subdimensions) operationalized as 101 fine-grained rubric items. We also provide task-specific Expert Evaluation Guidance to support LLM-based judging. Alongside rubric-based assessment, we propose a claim verification architecture that verifies both cited and uncited claims and quantifies evidence quality. Experiments show that while current deep research systems can produce structurally plausible reports that cite external evidence, there is room for improvement in fulfilling expert-level user requests and achieving logical completeness. Beyond simple performance comparisons, DEER makes system strengths and limitations interpretable and provides diagnostic signals for improvement.
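To make the two evaluation components described above concrete, the following is a minimal, hypothetical sketch of how per-report results might be organized and aggregated. The names (RubricItem, ClaimCheck, ReportEvaluation) and the mean-based aggregation are illustrative assumptions, not the paper's released implementation.

    # Hypothetical sketch: combining rubric-based judge scores with
    # claim-verification results into per-report summary statistics.
    from dataclasses import dataclass, field

    @dataclass
    class RubricItem:
        """One fine-grained rubric item derived from the taxonomy."""
        dimension: str      # e.g., one of the 7 top-level dimensions
        subdimension: str   # e.g., one of the 25 subdimensions
        description: str
        score: float = 0.0  # judge score, assumed normalized to [0, 1]

    @dataclass
    class ClaimCheck:
        """Verification result for a single claim in the report."""
        claim: str
        cited: bool         # whether the report attached a citation
        supported: bool     # whether retrieved evidence supports the claim

    @dataclass
    class ReportEvaluation:
        rubric: list[RubricItem] = field(default_factory=list)
        claims: list[ClaimCheck] = field(default_factory=list)

        def rubric_score(self) -> float:
            """Mean rubric score across all items (assumed aggregation)."""
            return sum(i.score for i in self.rubric) / max(len(self.rubric), 1)

        def support_rate(self) -> float:
            """Fraction of claims backed by evidence, cited or uncited."""
            return sum(c.supported for c in self.claims) / max(len(self.claims), 1)

Under these assumptions, a report's diagnostic profile would be the pair (rubric_score, support_rate), with per-dimension breakdowns recoverable by grouping RubricItem entries by their dimension field.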
