VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model

26 February 2025
Jiani Zheng
Lu Wang
Fangkai Yang
Chaoyun Zhang
Lingrui Mei
Wenjie Yin
Qingwei Lin
Dongmei Zhang
Saravan Rajmohan
Qi Zhang
Abstract

Training Vision-Language Models (VLMs) for Graphical User Interface (GUI) agents via Reinforcement Learning (RL) faces critical challenges: environment-based RL requires costly interactions, while environment-free methods struggle with distribution shift and reward generalization. We propose an environment-free RL framework that decouples value estimation from policy optimization by leveraging a pretrained Value Environment Model (VEM). VEM predicts state-action values directly from offline data, distilling human-like priors about GUI interaction outcomes without requiring next-state prediction or environmental feedback. This avoids compounding errors and enhances resilience to UI changes by focusing on semantic reasoning (e.g., does this action advance the user's goal?). The framework operates in two stages: (1) pretraining VEM to estimate long-term action utilities and (2) guiding policy exploration with frozen VEM signals, enabling layout-agnostic GUI automation. Evaluated on Android-in-the-Wild benchmarks, VEM achieves state-of-the-art performance in both offline and online settings, significantly outperforming environment-free baselines and matching environment-based approaches without incurring interaction costs. Importantly, VEM demonstrates that semantic-aware value estimation can achieve performance comparable to online-trained methods.
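
The two-stage procedure described in the abstract can be illustrated with a minimal sketch. The snippet below uses toy MLP stand-ins for the VLM policy and the value model, random placeholder offline data, and simple mean-squared-error and value-maximization objectives; these are assumptions for illustration only and do not reproduce the paper's actual architectures, data, or losses.

# Minimal sketch of the two-stage VEM training loop described in the abstract.
# Model classes, data, and loss choices are illustrative assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn


class ValueEnvironmentModel(nn.Module):
    """Toy stand-in for the VEM: scores (state, action) pairs."""

    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


class Policy(nn.Module):
    """Toy GUI policy: maps a state embedding to an action embedding."""

    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


dim, batch = 32, 16

# Stage 1: pretrain VEM on offline (state, action, return) triples so it
# estimates long-term action utility without next-state prediction.
vem = ValueEnvironmentModel(dim)
vem_opt = torch.optim.Adam(vem.parameters(), lr=1e-3)
for _ in range(200):
    s, a = torch.randn(batch, dim), torch.randn(batch, dim)
    ret = torch.randn(batch)  # placeholder offline return labels
    loss = nn.functional.mse_loss(vem(s, a), ret)
    vem_opt.zero_grad()
    loss.backward()
    vem_opt.step()

# Stage 2: freeze VEM and use its value signal to guide policy exploration,
# with no environment interaction during policy training.
for p in vem.parameters():
    p.requires_grad_(False)
policy = Policy(dim)
pol_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):
    s = torch.randn(batch, dim)
    a = policy(s)
    value = vem(s, a)          # frozen VEM scores the proposed action
    pol_loss = -value.mean()   # ascend the frozen value estimate
    pol_opt.zero_grad()
    pol_loss.backward()
    pol_opt.step()

Because the VEM is frozen in stage 2, gradients flow through it only to the policy's proposed action, which is what makes the exploration environment-free: no rollouts or environment feedback are needed during policy optimization.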

@article{zheng2025_2502.18906,
  title={VEM: Environment-Free Exploration for Training GUI Agent with Value Environment Model},
  author={Jiani Zheng and Lu Wang and Fangkai Yang and Chaoyun Zhang and Lingrui Mei and Wenjie Yin and Qingwei Lin and Dongmei Zhang and Saravan Rajmohan and Qi Zhang},
  journal={arXiv preprint arXiv:2502.18906},
  year={2025}
}