LLM-Match: An Open-Sourced Patient Matching Model Based on Large Language Models and Retrieval-Augmented Generation

17 March 2025
Xiaodi Li
Shaika Chowdhury
Chung Il Wi
Maria Vassilaki
Xiaoke Liu
Terence T Sio
Owen Garrick
Young J Juhn
James R Cerhan
Cui Tao
Nansu Zong
Abstract

Patient matching is the process of linking patients to appropriate clinical trials by accurately identifying and matching their medical records with trial eligibility criteria. We propose LLM-Match, a novel framework for patient matching that leverages fine-tuned open-source large language models. Our approach consists of four key components. First, a retrieval-augmented generation (RAG) module extracts relevant patient context from a vast pool of electronic health records (EHRs). Second, a prompt generation module constructs input prompts by integrating trial eligibility criteria (both inclusion and exclusion criteria), patient context, and system instructions. Third, a fine-tuning module with a classification head optimizes the model parameters using structured prompts and ground-truth labels. Fourth, an evaluation module assesses the fine-tuned model's performance on the test datasets. We evaluated LLM-Match on four open datasets (n2c2, SIGIR, TREC 2021, and TREC 2022) using open-source models, comparing it against TrialGPT, zero-shot baselines, and closed GPT-4-based models. LLM-Match outperformed all baselines.
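The four components described above map onto a straightforward retrieve-then-classify pipeline. The sketch below illustrates one plausible realization; it is not the authors' released implementation, and the embedder (all-MiniLM-L6-v2), the prompt template, and the Llama backbone are stand-in assumptions chosen for demonstration.

# A minimal sketch of the four-stage pipeline described in the abstract;
# illustrative only, not the authors' released code. The embedder, the
# prompt template, and the backbone model are all assumptions.
import torch
from sentence_transformers import SentenceTransformer, util
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# 1. RAG module: embed EHR chunks and retrieve the patient context most
#    relevant to the trial's eligibility criteria.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedder

def retrieve_context(ehr_chunks, criteria, top_k=3):
    chunk_emb = embedder.encode(ehr_chunks, convert_to_tensor=True)
    query_emb = embedder.encode(criteria, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, chunk_emb)[0]
    top = torch.topk(scores, k=min(top_k, len(ehr_chunks))).indices
    return "\n".join(ehr_chunks[i] for i in top)

# 2. Prompt generation module: merge system instructions, inclusion and
#    exclusion criteria, and retrieved context into one structured prompt.
def build_prompt(inclusion, exclusion, context):
    return (
        "You are a clinical trial patient-matching assistant.\n"
        f"Inclusion criteria: {inclusion}\n"
        f"Exclusion criteria: {exclusion}\n"
        f"Patient context:\n{context}\n"
        "Question: Is this patient eligible for the trial?"
    )

# 3. Fine-tuning module: an open-source backbone with a binary
#    classification head (eligible / not eligible), trained on
#    structured prompts and ground-truth labels.
backbone = "meta-llama/Llama-3.1-8B"  # assumed open-source backbone
tokenizer = AutoTokenizer.from_pretrained(backbone)
model = AutoModelForSequenceClassification.from_pretrained(backbone, num_labels=2)

def training_step(prompt, label, optimizer):
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    loss = model(**inputs, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# 4. Evaluation module: argmax over the head's logits on held-out prompts.
def predict(prompt):
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

Framing eligibility as binary classification rather than free-text generation is what the classification head in the third component implies: it allows standard cross-entropy fine-tuning and makes evaluation on the test datasets a direct label-comparison task.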

@article{li2025_2503.13281,
  title={LLM-Match: An Open-Sourced Patient Matching Model Based on Large Language Models and Retrieval-Augmented Generation},
  author={Xiaodi Li and Shaika Chowdhury and Chung Il Wi and Maria Vassilaki and Xiaoke Liu and Terence T Sio and Owen Garrick and Young J Juhn and James R Cerhan and Cui Tao and Nansu Zong},
  journal={arXiv preprint arXiv:2503.13281},
  year={2025}
}