Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents

24 April 2025
Chaoran Chen
Zhiping Zhang
Ibrahim Khalilov
Bingcan Guo
Simret Araya Gebreegziabher
Yanfang Ye
Ziang Xiao
Yaxing Yao
Tianshi Li
Toby Jia-Jun Li
Abstract

The rise of Large Language Models (LLMs) has revolutionized Graphical User Interface (GUI) automation through LLM-powered GUI agents, yet their ability to process sensitive data with limited human oversight raises significant privacy and security risks. This position paper identifies three key risks of GUI agents and examines how they differ from traditional GUI automation and general autonomous agents. Despite these risks, existing evaluations focus primarily on performance, leaving privacy and security assessments largely unexplored. We review current evaluation metrics for both GUI and general LLM agents and outline five key challenges in integrating human evaluators for GUI agent assessments. To address these gaps, we advocate for a human-centered evaluation framework that incorporates risk assessments, enhances user awareness through in-context consent, and embeds privacy and security considerations into GUI agent design and evaluation.
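The "in-context consent" idea advocated above can be illustrated with a minimal sketch: an agent gates any action that appears to touch sensitive data behind an explicit user prompt. This is a hypothetical illustration, not the authors' implementation; the keyword list, function names, and `ask_user` callback are all assumed for the example.

```python
# Hypothetical sketch of in-context consent for a GUI agent (not from the paper).
# An action that looks sensitive is executed only after the user approves it.

SENSITIVE_KEYWORDS = {"password", "credit card", "ssn", "bank account"}


def requires_consent(action_description: str) -> bool:
    """Flag actions that appear to involve sensitive data."""
    text = action_description.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)


def execute_with_consent(action_description: str, ask_user) -> str:
    """Run an action, pausing for in-context consent when it looks sensitive.

    ask_user is a callback (e.g. a GUI dialog) returning True if the user approves.
    """
    if requires_consent(action_description):
        if not ask_user(f"The agent wants to: {action_description}. Allow?"):
            return "blocked"
    return "executed"


# A sensitive action with a declining user is blocked; a benign one proceeds.
print(execute_with_consent("autofill the credit card form", lambda q: False))  # blocked
print(execute_with_consent("scroll to the bottom of the page", lambda q: False))  # executed
```

In practice the keyword check would be replaced by a context-aware risk classifier, but the control flow (detect risk, surface an in-context prompt, proceed only on approval) is the point of the sketch.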

@article{chen2025_2504.17934,
  title={Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents},
  author={Chaoran Chen and Zhiping Zhang and Ibrahim Khalilov and Bingcan Guo and Simret A Gebreegziabher and Yanfang Ye and Ziang Xiao and Yaxing Yao and Tianshi Li and Toby Jia-Jun Li},
  journal={arXiv preprint arXiv:2504.17934},
  year={2025}
}