Automated Review Generation Method Based on Large Language Models

30 July 2024
Shican Wu
Xiao Ma
Dehui Luo
Lulu Li
Xiangcheng Shi
Xin Chang
Xiaoyun Lin
Ran Luo
Chunlei Pei
Zhi-Jian Zhao
Jinlong Gong
Abstract

Literature research, vital to scientific work, faces the challenge of surging information volumes that exceed researchers' processing capabilities. We present an automated review generation method based on large language models (LLMs) to overcome efficiency bottlenecks and reduce cognitive load. Our statistically validated evaluation framework demonstrates that the generated reviews match or exceed manual quality, offering broad applicability across research fields without requiring domain knowledge from users. Applied to propane dehydrogenation (PDH) catalysts, our method swiftly analyzed 343 articles, averaging seconds per article per LLM account, and produced comprehensive reviews spanning 35 topics; an extended analysis of 1041 articles provided further insights into catalyst properties. Through multi-layered quality control, we effectively mitigated LLM hallucinations: expert verification confirmed accuracy and citation integrity and showed the hallucination risk reduced to below 0.5% with 95% confidence. A released Windows application enables one-click review generation, enhancing research productivity and literature recommendation efficiency while setting the stage for broader scientific exploration.
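The abstract describes the method only at a high level. As a rough sketch of how an LLM-based review generator of this kind is commonly structured (a map-reduce pattern: summarize each article, then synthesize per-topic sections from the summaries), consider the following Python outline. This is not the authors' implementation; call_llm, the prompts, and all function names are hypothetical stand-ins for whatever model API and prompting scheme the tool actually uses.

    # Minimal map-reduce sketch of LLM-based review generation.
    # NOT the authors' code; `call_llm` is a hypothetical stand-in
    # for a real model API client.

    def call_llm(prompt: str) -> str:
        """Hypothetical LLM call; replace with a real client."""
        raise NotImplementedError

    def summarize_article(text: str) -> str:
        # Map step: condense one article to the facts needed for the review.
        return call_llm(
            "Summarize the key findings, methods, and data of this article "
            "in under 200 words, without citing external sources:\n\n" + text
        )

    def write_topic_section(topic: str, summaries: list[str]) -> str:
        # Reduce step: synthesize one review section from per-article summaries.
        joined = "\n---\n".join(summaries)
        return call_llm(
            f"Write a review section on '{topic}' using only these article "
            f"summaries; attribute every claim to its source:\n\n{joined}"
        )

    def generate_review(articles: dict[str, str], topics: list[str]) -> str:
        # Full pipeline: per-article summaries feed every topic section.
        summaries = [summarize_article(t) for t in articles.values()]
        return "\n\n".join(write_topic_section(tp, summaries) for tp in topics)

The map step is what makes per-article cost a matter of seconds: each article is processed independently, so the work parallelizes across LLM accounts.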

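The claim that hallucination risk is below 0.5% with 95% confidence is the kind of bound a one-sided exact binomial (Clopper-Pearson) interval yields from a spot-check of generated statements. The abstract does not specify the authors' exact statistical procedure, so the sketch below, including the sample counts in the example, is an illustrative assumption rather than their method.

    # Sketch: one-sided Clopper-Pearson upper bound on the hallucination
    # rate, estimated from a spot-check of generated statements.
    # Illustrative only; not the paper's verified procedure or data.
    from scipy.stats import beta

    def hallucination_upper_bound(errors: int, checked: int,
                                  confidence: float = 0.95) -> float:
        """Upper confidence bound on the true error rate, given `errors`
        hallucinated statements found among `checked` verified ones."""
        if errors == checked:
            return 1.0
        return beta.ppf(confidence, errors + 1, checked - errors)

    # Hypothetical example: zero errors in 600 spot-checked statements
    # gives an upper bound of about 0.498%, i.e. below 0.5%.
    print(f"{hallucination_upper_bound(0, 600):.4%}")

With zero observed errors the bound reduces to 1 - 0.05^(1/n), so roughly 600 clean spot-checks are needed before the 95% upper bound falls below 0.5%.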
@article{wu2025_2407.20906,
  title={Automated Review Generation Method Based on Large Language Models},
  author={Shican Wu and Xiao Ma and Dehui Luo and Lulu Li and Xiangcheng Shi and Xin Chang and Xiaoyun Lin and Ran Luo and Chunlei Pei and Changying Du and Zhi-Jian Zhao and Jinlong Gong},
  journal={arXiv preprint arXiv:2407.20906},
  year={2025}
}