The rise of Large Language Models (LLMs) has revolutionized Graphical User Interface (GUI) automation through LLM-powered GUI agents, yet their ability to process sensitive data with limited human oversight raises significant privacy and security risks. This position paper identifies three key risks of GUI agents and examines how these risks differ from those posed by traditional GUI automation and general autonomous agents. Despite these risks, existing evaluations focus primarily on performance, leaving privacy and security assessments largely unexplored. We review current evaluation metrics for both GUI and general LLM agents and outline five key challenges in integrating human evaluators into GUI agent assessments. To address these gaps, we advocate for a human-centered evaluation framework that incorporates risk assessments, enhances user awareness through in-context consent, and embeds privacy and security considerations into GUI agent design and evaluation.
@article{chen2025_2504.17934,
  title={Toward a Human-Centered Evaluation Framework for Trustworthy LLM-Powered GUI Agents},
  author={Chaoran Chen and Zhiping Zhang and Ibrahim Khalilov and Bingcan Guo and Simret A Gebreegziabher and Yanfang Ye and Ziang Xiao and Yaxing Yao and Tianshi Li and Toby Jia-Jun Li},
  journal={arXiv preprint arXiv:2504.17934},
  year={2025}
}