Less Context, Same Performance: A RAG Framework for Resource-Efficient LLM-Based Clinical NLP

23 May 2025
Satya Narayana Cheetirala
Ganesh Raut
Dhavalkumar Patel
Fabio Sanatana
Robert Freeman
Matthew A Levin
Girish N. Nadkarni
Omar Dawkins
Reba Miller
Randolph M. Steinhagen
Eyal Klang
Prem Timsina
Main: 5 pages · 9 figures · Bibliography: 1 page · Appendix: 1 page
Abstract

Long-text classification is challenging for Large Language Models (LLMs) due to token limits and high computational costs. This study explores whether a Retrieval Augmented Generation (RAG) approach using only the most relevant text segments can match the performance of processing entire clinical notes with large-context LLMs. We begin by splitting clinical documents into smaller chunks, converting them into vector embeddings, and storing these in a FAISS index. We then retrieve the top 4,000 words most pertinent to the classification query and feed these consolidated segments into an LLM. We evaluated three LLMs (GPT-4o, LLaMA, and Mistral) on a surgical complication identification task. Metrics such as AUC-ROC, precision, recall, and F1 showed no statistically significant differences between the RAG-based approach and whole-text processing (p > 0.05). These findings indicate that RAG can significantly reduce token usage without sacrificing classification accuracy, providing a scalable and cost-effective solution for analyzing lengthy clinical documents.
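
To make the pipeline concrete, here is a minimal sketch of the chunk-embed-retrieve step described in the abstract. It assumes sentence-transformers for embeddings and faiss-cpu for the index; the embedding model (all-MiniLM-L6-v2), the 200-word chunk size, and the helper names (chunk_words, retrieve_context) are illustrative assumptions, not the paper's exact configuration. Only the FAISS index and the roughly 4,000-word retrieval budget come from the abstract itself.

```python
# Hedged sketch of the RAG retrieval step: chunk a clinical note, embed the
# chunks, index them in FAISS, and keep only the chunks most relevant to the
# classification query, up to a ~4,000-word budget (per the abstract).
# Model choice, chunk size, and function names are illustrative assumptions.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer


def chunk_words(text: str, chunk_size: int = 200) -> list[str]:
    """Split a note into fixed-size word chunks (chunk size is an assumption)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]


def retrieve_context(note: str, query: str, word_budget: int = 4000) -> str:
    """Return the chunks most similar to the query, up to `word_budget` words."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    chunks = chunk_words(note)

    # Normalized embeddings + inner-product index == cosine similarity search.
    embeddings = model.encode(chunks, normalize_embeddings=True)
    index = faiss.IndexFlatIP(embeddings.shape[1])
    index.add(np.asarray(embeddings, dtype=np.float32))

    query_vec = model.encode([query], normalize_embeddings=True)
    _, ranked = index.search(np.asarray(query_vec, dtype=np.float32), len(chunks))

    # Greedily take the best-ranked chunks until the word budget is exhausted.
    selected, used = [], 0
    for idx in ranked[0]:
        n_words = len(chunks[idx].split())
        if used + n_words > word_budget:
            break
        selected.append(chunks[idx])
        used += n_words
    return "\n\n".join(selected)
```

The returned context would then replace the full note in the classification prompt (e.g., asking whether a surgical complication occurred), cutting token usage while retaining the segments most relevant to the query.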
