arXiv: 2508.10833v2 (latest)

UI-Venus Technical Report: Building High-performance UI Agents with RFT

14 August 2025
Zhangxuan Gu
Zhengwen Zeng
Zhenyu Xu
Xingran Zhou
Shuheng Shen
Yunfei Liu
Beitong Zhou
Changhua Meng
Tianyu Xia
Weizhi Chen
Yue Wen
Jingya Dou
Fei Tang
Jinzhen Lin
Y. Liu
Zhenlin Guo
Yichen Gong
Heng Jia
Changlong Gao
Yuan Guo
Yong Deng
Zhenyu Guo
Liang Chen
Weiqiang Wang
Communities: LLMAG, LM&Ro
Links: ArXiv (abs) · PDF · HTML · HuggingFace (38 upvotes) · GitHub (476★)
Main: 21 pages · 11 figures · Bibliography: 3 pages · 10 tables · Appendix: 9 pages
Abstract

We present UI-Venus, a native UI agent that takes only screenshots as input, built on a multimodal large language model. UI-Venus achieves SOTA performance on both UI grounding and navigation tasks using only several hundred thousand high-quality training samples through reinforcement fine-tuning (RFT) based on Qwen2.5-VL. Specifically, the 7B and 72B variants of UI-Venus obtain 94.1% / 50.8% and 95.3% / 61.9% on the standard grounding benchmarks, i.e., ScreenSpot-V2 / Pro, surpassing previous SOTA baselines including the open-source GTA1 and the closed-source UI-TARS-1.5. To show UI-Venus's summarization and planning ability, we also evaluate it on AndroidWorld, an online UI navigation arena, on which our 7B and 72B variants achieve 49.1% and 65.9% success rates, also beating existing SOTA agents. To achieve this, we introduce carefully designed reward functions for both UI grounding and navigation tasks, together with corresponding efficient data cleaning strategies. To further boost navigation performance, we propose Self-Evolving Trajectory History Alignment & Sparse Action Enhancement, which refines historical reasoning traces and balances the distribution of sparse but critical actions, leading to more coherent planning and better generalization in complex UI tasks. Our contributions include the release of SOTA open-source UI agents, comprehensive data cleaning protocols, and a novel self-evolving framework for improving navigation performance, which we hope will encourage further research and development in the community. Code is available in the GitHub repository linked above.
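The abstract mentions carefully designed reward functions for UI grounding but does not spell them out on this page. Purely as an illustration of the kind of rule-based, verifiable reward commonly used when applying RFT to grounding (a predicted click point scored against the ground-truth element box), here is a minimal Python sketch; the function name, the format bonus, and the exact scoring are assumptions for illustration, not UI-Venus's actual implementation.

    import re

    def grounding_reward(response: str, gt_box: tuple) -> float:
        """Illustrative verifiable reward for UI grounding under RFT.

        The model is prompted to answer with a click point such as "(x, y)".
        Reward is 1.0 if the predicted point falls inside the ground-truth
        bounding box (x1, y1, x2, y2), plus a small bonus for well-formed
        output. This mirrors common practice in RFT for GUI grounding; it is
        NOT the exact reward used by UI-Venus.
        """
        match = re.search(r"\(\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s*\)", response)
        if match is None:
            return 0.0  # unparsable output earns nothing
        format_bonus = 0.1  # well-formed coordinate string

        x, y = float(match.group(1)), float(match.group(2))
        x1, y1, x2, y2 = gt_box
        hit = 1.0 if (x1 <= x <= x2 and y1 <= y <= y2) else 0.0
        return hit + format_bonus

    if __name__ == "__main__":
        # Toy check: predicted click lands inside the target element's box.
        print(grounding_reward("click at (512, 384)", (480, 360, 560, 400)))  # 1.1
        print(grounding_reward("no coordinates here", (480, 360, 560, 400)))  # 0.0

Navigation rewards and the Sparse Action Enhancement described in the abstract would require trajectory-level scoring and re-balancing of rare action types, which this simple per-sample sketch does not attempt to reproduce.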
