ResearchTrend.AI

On the Robustness of GUI Grounding Models Against Image Attacks

7 April 2025
Haoren Zhao
Tianyi Chen
Zhen Wang
    AAML
Abstract

Graphical User Interface (GUI) grounding models are crucial for enabling intelligent agents to understand and interact with complex visual interfaces. However, these models face significant robustness challenges in real-world scenarios due to natural noise and adversarial perturbations, and their robustness remains underexplored. In this study, we systematically evaluate the robustness of state-of-the-art GUI grounding models, such as UGround, under three conditions: natural noise, untargeted adversarial attacks, and targeted adversarial attacks. Our experiments, conducted across a wide range of GUI environments, including mobile, desktop, and web interfaces, demonstrate that GUI grounding models are highly sensitive to adversarial perturbations and low-resolution conditions. These findings provide valuable insights into the vulnerabilities of GUI grounding models and establish a strong benchmark for future research aimed at enhancing their robustness in practical applications. Our code is available at this https URL.
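To make the "untargeted adversarial attack" condition concrete, here is a minimal, hypothetical sketch of a one-step FGSM-style perturbation. It is not the paper's actual attack pipeline: the real evaluation targets models such as UGround, while this sketch stands in a toy linear "grounding model" (`predict`) that maps a flattened screenshot to a predicted (x, y) click coordinate, then perturbs the input to push the prediction away from the ground-truth coordinate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a GUI grounding model: a fixed linear map from a
# flattened 64-pixel "screenshot" to a predicted (x, y) click coordinate.
# (Hypothetical; the paper evaluates real models such as UGround.)
W = rng.normal(size=(2, 64))

def predict(x):
    return W @ x

def loss(x, target):
    # Squared error between the predicted and ground-truth coordinate.
    d = predict(x) - target
    return float(d @ d)

def grad_loss(x, target):
    # Analytic gradient of the squared error w.r.t. the input pixels.
    return 2.0 * W.T @ (predict(x) - target)

def fgsm_untargeted(x, target, eps=0.01):
    # One-step FGSM: nudge every pixel by eps in the sign of the gradient,
    # increasing the grounding error while keeping the perturbation small
    # (bounded by eps in the L-infinity norm).
    return x + eps * np.sign(grad_loss(x, target))

x_clean = rng.normal(size=64)
target = np.array([0.5, 0.5])   # ground-truth click location
x_adv = fgsm_untargeted(x_clean, target)
print(loss(x_clean, target), loss(x_adv, target))  # adversarial loss is larger
```

A targeted variant would instead *descend* the gradient of the loss toward an attacker-chosen coordinate, steering the model to click a specific wrong element rather than merely missing the right one.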

View on arXiv
@article{zhao2025_2504.04716,
  title={On the Robustness of GUI Grounding Models Against Image Attacks},
  author={Haoren Zhao and Tianyi Chen and Zhen Wang},
  journal={arXiv preprint arXiv:2504.04716},
  year={2025}
}