Transforming Wearable Data into Health Insights using Large Language Model Agents

10 June 2024
Mike A. Merrill
Akshay Paruchuri
Naghmeh Rezaei
Geza Kovacs
Javier Perez
Yun-hui Liu
Erik Schenck
Nova Hammerquist
Jake Sunshine
Shyam Tailor
Kumar Ayush
Hao-Wei Su
Qian He
Cory Y. McLean
Mark Malhotra
Shwetak Patel
Jiening Zhan
Tim Althoff
Daniel J. McDuff
Xin Liu
Communities: LM&MA · LLMAG · AI4CE
arXiv:2406.06464
Abstract

Despite the proliferation of wearable health trackers and the importance of sleep and exercise to health, deriving actionable personalized insights from wearable data remains a challenge because doing so requires non-trivial open-ended analysis of these data. The recent rise of large language model (LLM) agents, which can use tools to reason about and interact with the world, presents a promising opportunity to enable such personalized analysis at scale. Yet, the application of LLM agents in analyzing personal health is still largely untapped. In this paper, we introduce the Personal Health Insights Agent (PHIA), an agent system that leverages state-of-the-art code generation and information retrieval tools to analyze and interpret behavioral health data from wearables. We curate two benchmark question-answering datasets of over 4000 health insights questions. Based on 650 hours of human and expert evaluation we find that PHIA can accurately address over 84% of factual numerical questions and more than 83% of crowd-sourced open-ended questions. This work has implications for advancing behavioral health across the population, potentially enabling individuals to interpret their own wearable data, and paving the way for a new era of accessible, personalized wellness regimens that are informed by data-driven insights.
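To make the pattern described in the abstract concrete, the sketch below shows, in broad strokes, how a code-generation agent can answer a question about tabular wearable data: a model writes analysis code, the code is executed, and the numeric result grounds the answer. This is an illustrative assumption, not PHIA's actual implementation; the function names, prompts, and sample data here are hypothetical stand-ins, and the paper defines the real agent design, tools, and benchmarks.

```python
# Illustrative sketch of a code-generation agent over wearable data.
# All names and data are hypothetical; PHIA's actual design is described in the paper.
import pandas as pd

# Hypothetical daily wearable summary (dates, sleep duration, step counts).
wearable_df = pd.DataFrame({
    "date": pd.date_range("2024-05-01", periods=14, freq="D"),
    "sleep_minutes": [420, 390, 450, 400, 380, 410, 430,
                      405, 395, 440, 415, 400, 425, 435],
    "steps": [8200, 9500, 7100, 10300, 6800, 12000, 9900,
              8700, 9100, 7600, 10800, 9400, 8800, 10100],
})

def generate_analysis_code(question: str) -> str:
    """Stand-in for an LLM code-generation call (e.g., prompting a model with the
    question plus the DataFrame schema). Returns a canned snippet so the sketch
    runs without model access."""
    return (
        "weekday = wearable_df[wearable_df['date'].dt.dayofweek < 5]\n"
        "weekend = wearable_df[wearable_df['date'].dt.dayofweek >= 5]\n"
        "result = weekend['sleep_minutes'].mean() - weekday['sleep_minutes'].mean()"
    )

def answer_question(question: str) -> float:
    """Generate analysis code for the question, execute it against the wearable
    data, and return the computed result that would ground the agent's answer."""
    code = generate_analysis_code(question)
    scope = {"wearable_df": wearable_df, "pd": pd}
    exec(code, scope)  # a real agent would sandbox this execution step
    return scope["result"]

print(answer_question("How much longer do I sleep on weekends than on weekdays?"))
```

The design choice this mirrors is that computing the answer with generated code, rather than asking the model to estimate it directly, keeps the numeric reasoning grounded in the user's own data.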
