ResearchTrend.AI



TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews

26 March 2025
Huimin Xu
Seungjun Yi
Terence Lim
Jiawei Xu
Andrew Well
Carlos M. Mery
Aidong Zhang
Yuji Zhang
Heng Ji
Keshav Pingali
Yan Leng
Ying Ding
Abstract

Thematic analysis (TA) is a widely used qualitative approach for uncovering latent meanings in unstructured text data. TA provides valuable insights in healthcare but is resource-intensive. Large Language Models (LLMs) have been introduced to perform TA, yet their applications in healthcare remain largely unexplored. Here, we propose TAMA: a Human-AI Collaborative Thematic Analysis framework using Multi-Agent LLMs for clinical interviews. We leverage the scalability and coherence of multi-agent systems through structured conversations between agents, and we coordinate the expertise of cardiac experts in TA. Using interview transcripts from parents of children with Anomalous Aortic Origin of a Coronary Artery (AAOCA), a rare congenital heart disease, we demonstrate that TAMA outperforms existing LLM-assisted TA approaches, achieving a higher thematic hit rate as well as greater coverage and distinctiveness. By combining multi-agent LLM systems with human-in-the-loop integration, TAMA shows strong potential for automated TA in clinical settings, enhancing quality while significantly reducing manual workload.
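The multi-agent, human-in-the-loop pipeline the abstract describes can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: the agent names, the keyword-based stand-ins for LLM calls, and the `tama_pipeline` helper are all assumptions made for demonstration.

```python
# Hypothetical sketch of a multi-agent thematic-analysis loop with a
# human-in-the-loop review step. The LLM calls are stubbed with simple
# keyword heuristics; real agents would query a language model.

def coding_agent(transcript):
    """Propose candidate codes for one transcript (stub for an LLM coder agent)."""
    keywords = {"worry": "parental anxiety",
                "doctor": "trust in clinicians",
                "surgery": "treatment decision-making"}
    return sorted({code for word, code in keywords.items()
                   if word in transcript.lower()})

def theme_agent(codes):
    """Promote codes into candidate themes (stub for an LLM theme agent)."""
    return [f"Theme: {code}" for code in codes]

def reviewer_agent(themes):
    """Enforce distinctiveness by dropping duplicate themes (stub)."""
    seen, kept = set(), []
    for theme in themes:
        if theme not in seen:
            seen.add(theme)
            kept.append(theme)
    return kept

def human_in_the_loop(themes, approved=None):
    """A clinical expert filters the proposed themes; approve all by default."""
    if approved is None:
        return themes
    return [theme for theme in themes if theme in approved]

def tama_pipeline(transcripts):
    """Structured hand-off: code -> theme -> review -> expert approval."""
    codes = sorted({code for t in transcripts for code in coding_agent(t)})
    themes = reviewer_agent(theme_agent(codes))
    return human_in_the_loop(themes)

transcripts = [
    "We worry every day about whether surgery is the right choice.",
    "Our doctor explained the condition clearly, which helped us trust the plan.",
]
print(tama_pipeline(transcripts))
```

The design point this sketch captures is the structured hand-off between specialized agents, with a human expert as the final gate over the machine-proposed themes.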

@article{xu2025_2503.20666,
  title={TAMA: A Human-AI Collaborative Thematic Analysis Framework Using Multi-Agent LLMs for Clinical Interviews},
  author={Huimin Xu and Seungjun Yi and Terence Lim and Jiawei Xu and Andrew Well and Carlos Mery and Aidong Zhang and Yuji Zhang and Heng Ji and Keshav Pingali and Yan Leng and Ying Ding},
  journal={arXiv preprint arXiv:2503.20666},
  year={2025}
}