DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process

Abstract

Large Language Models (LLMs) are increasingly utilized in scientific research assessment, particularly in automated paper review. However, existing LLM-based review systems face significant challenges, including limited domain expertise, hallucinated reasoning, and a lack of structured evaluation. To address these limitations, we introduce DeepReview, a multi-stage framework designed to emulate expert reviewers by incorporating structured analysis, literature retrieval, and evidence-based argumentation. Using DeepReview-13K, a curated dataset with structured annotations, we train DeepReviewer-14B, which outperforms CycleReviewer-70B while using fewer tokens. In its best mode, DeepReviewer-14B achieves win rates of 88.21% and 80.20% against GPT-o1 and DeepSeek-R1 in evaluations. Our work sets a new benchmark for LLM-based paper review, with all resources publicly available. The code, model, dataset, and demo have been released at this http URL.
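The abstract names the stages of the framework (structured analysis, literature retrieval, evidence-based argumentation) but gives no implementation detail. The following is a minimal Python sketch of how such a staged reviewer could be orchestrated; every function and class name here (ReviewState, analyze_structure, retrieve_literature, argue_with_evidence, write_review) is a hypothetical illustration, not the authors' actual API or pipeline.

# Hypothetical sketch of a multi-stage review pipeline in the spirit of
# DeepReview. The LLM and retriever backends are pluggable callables so
# the sketch runs standalone with dummy implementations.
from dataclasses import dataclass, field
from typing import Callable, List

Generate = Callable[[str], str]  # any text-in/text-out LLM interface

@dataclass
class ReviewState:
    paper: str
    analysis: str = ""
    evidence: List[str] = field(default_factory=list)
    arguments: str = ""
    review: str = ""

def analyze_structure(state: ReviewState, llm: Generate) -> None:
    # Stage 1: decompose the paper into claims, methods, and results.
    state.analysis = llm(f"Summarize the key claims and methods of:\n{state.paper}")

def retrieve_literature(state: ReviewState, search: Callable[[str], List[str]]) -> None:
    # Stage 2: ground the analysis in related work via a retriever.
    state.evidence = search(state.analysis)

def argue_with_evidence(state: ReviewState, llm: Generate) -> None:
    # Stage 3: build strengths and weaknesses backed by retrieved evidence.
    joined = "\n".join(state.evidence)
    state.arguments = llm(
        f"Given the analysis:\n{state.analysis}\n"
        f"and related work:\n{joined}\n"
        "List evidence-backed strengths and weaknesses."
    )

def write_review(state: ReviewState, llm: Generate) -> None:
    # Stage 4: synthesize a structured review from the argumentation.
    state.review = llm(f"Write a peer review based on:\n{state.arguments}")

def deep_review(paper: str, llm: Generate, search: Callable[[str], List[str]]) -> str:
    state = ReviewState(paper=paper)
    analyze_structure(state, llm)
    retrieve_literature(state, search)
    argue_with_evidence(state, llm)
    write_review(state, llm)
    return state.review

if __name__ == "__main__":
    # Dummy backends so the sketch executes without external services.
    echo_llm = lambda prompt: f"[model output for: {prompt[:40]}...]"
    dummy_search = lambda query: ["[related paper A]", "[related paper B]"]
    print(deep_review("An example paper body.", echo_llm, dummy_search))

Keeping each stage as a separate function over a shared state object mirrors the paper's emphasis on structured, evidence-grounded steps rather than a single free-form generation pass.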

@article{zhu2025_2503.08569,
  title={DeepReview: Improving LLM-based Paper Review with Human-like Deep Thinking Process},
  author={Minjun Zhu and Yixuan Weng and Linyi Yang and Yue Zhang},
  journal={arXiv preprint arXiv:2503.08569},
  year={2025}
}