
LitLLM: A Toolkit for Scientific Literature Review

Shubham Agarwal
Gaurav Sahu
Abhay Puri
Issam H. Laradji
Krishnamurthy DJ Dvijotham
Jason Stanley
Laurent Charlin
Christopher Pal
Abstract

Conducting literature reviews for scientific papers is essential for understanding research, its limitations, and building on existing work. It is a tedious task, which makes an automatic literature review generator appealing. Unfortunately, many existing works that generate such reviews using Large Language Models (LLMs) have significant limitations: they tend to hallucinate (generate non-factual information) and ignore the latest research they have not been trained on. To address these limitations, we propose a toolkit built on Retrieval Augmented Generation (RAG) principles together with specialized prompting and instruction techniques for LLMs. Our system first initiates a web search to retrieve relevant papers by summarizing the user-provided abstract into keywords using an off-the-shelf LLM. Authors can enhance the search by supplying relevant papers or keywords, contributing to a tailored retrieval process. Second, the system re-ranks the retrieved papers based on the user-provided abstract. Finally, the related work section is generated from the re-ranked results and the abstract. Compared to traditional methods, our toolkit substantially reduces the time and effort required for a literature review, establishing it as an efficient alternative. Our project page, including the demo and toolkit, can be accessed here: this https URL
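The abstract describes a three-stage pipeline: an LLM condenses the user's abstract into a search query, an academic search retrieves candidate papers, and the LLM re-ranks the candidates and drafts the related-work text. The sketch below is a minimal illustration of such a pipeline, not the toolkit's actual implementation: the function names, prompts, the choice of an OpenAI chat model, and the use of the Semantic Scholar search API are all assumptions made for the example.

```python
# Hypothetical sketch of the pipeline outlined in the abstract:
# (1) summarize the abstract into keywords, (2) retrieve candidate papers,
# (3) re-rank them against the abstract and draft a related-work section.
import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm(prompt: str) -> str:
    # Single-turn chat completion; model name is a placeholder.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def extract_keywords(abstract: str) -> str:
    return llm("Summarize this abstract into a short keyword query "
               f"for a literature search:\n\n{abstract}")


def search_papers(query: str, limit: int = 20) -> list[dict]:
    # Semantic Scholar's public search endpoint, used here as one possible retriever.
    r = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": query, "limit": limit, "fields": "title,abstract,year"},
        timeout=30,
    )
    r.raise_for_status()
    return r.json().get("data", [])


def rerank(abstract: str, papers: list[dict]) -> list[dict]:
    # Ask the LLM to order candidates by relevance to the user's abstract.
    listing = "\n".join(f"[{i}] {p['title']}" for i, p in enumerate(papers))
    order = llm("Rank these papers by relevance to the abstract below. "
                "Return the indices, most relevant first, comma-separated.\n\n"
                f"Abstract:\n{abstract}\n\nPapers:\n{listing}")
    idx = [int(tok) for tok in order.replace(",", " ").split() if tok.isdigit()]
    return [papers[i] for i in idx if i < len(papers)]


def generate_related_work(abstract: str, papers: list[dict], top_k: int = 5) -> str:
    context = "\n\n".join(
        f"[{i + 1}] {p['title']}: {p.get('abstract', '')}"
        for i, p in enumerate(papers[:top_k])
    )
    return llm("Write a related-work section for the abstract below, citing the "
               f"numbered papers.\n\nAbstract:\n{abstract}\n\nPapers:\n{context}")


if __name__ == "__main__":
    user_abstract = "..."  # the user-provided abstract
    query = extract_keywords(user_abstract)
    candidates = search_papers(query)
    ranked = rerank(user_abstract, candidates)
    print(generate_related_work(user_abstract, ranked))
```

In this sketch the retrieval and re-ranking stages are deliberately decoupled, mirroring the abstract's description: users could substitute their own paper list or keywords before re-ranking without changing the generation step.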

@article{agarwal2025_2402.01788,
  title={LitLLM: A Toolkit for Scientific Literature Review},
  author={Shubham Agarwal and Gaurav Sahu and Abhay Puri and Issam H. Laradji and Krishnamurthy DJ Dvijotham and Jason Stanley and Laurent Charlin and Christopher Pal},
  journal={arXiv preprint arXiv:2402.01788},
  year={2025}
}