ResearchTrend.AI

WebSailor: Navigating Super-human Reasoning for Web Agent

3 July 2025
Kuan Li
Zhongwang Zhang
Huifeng Yin
Liwen Zhang
Litu Ou
Jialong Wu
Wenbiao Yin
Baixuan Li
Zhengwei Tao
Xinyu Wang
Weizhou Shen
Junkai Zhang
Dingchu Zhang
Xixi Wu
Yong Jiang
Ming Yan
Pengjun Xie
Fei Huang
Jingren Zhou
Main text: 19 pages, 6 figures, 2 tables; bibliography: 4 pages
Abstract

Transcending human cognitive limitations represents a critical frontier in LLM training. Proprietary agentic systems like DeepResearch have demonstrated superhuman capabilities on extremely complex information-seeking benchmarks such as BrowseComp, a feat previously unattainable. We posit that their success hinges on a sophisticated reasoning pattern absent in open-source models: the ability to systematically reduce extreme uncertainty when navigating vast information landscapes. Based on this insight, we introduce WebSailor, a complete post-training methodology designed to instill this crucial capability. Our approach involves generating novel, high-uncertainty tasks through structured sampling and information obfuscation, RFT cold start, and an efficient agentic RL training algorithm, Duplicating Sampling Policy Optimization (DUPO). With this integrated pipeline, WebSailor significantly outperforms all open-source agents in complex information-seeking tasks, matching proprietary agents' performance and closing the capability gap.
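The abstract names DUPO only at a high level. As an illustration of the general duplicating-sampling idea in group-based agentic RL (not the paper's actual algorithm; all names and the filtering criterion here are assumptions), one can sketch refilling a training batch by duplicating rollout groups whose rewards actually vary, since groups with identical rewards carry no group-relative advantage signal:

```python
import random

def fill_batch_by_duplication(rollout_groups, batch_size):
    """Hypothetical sketch: drop rollout groups whose rewards are all
    identical (zero advantage variance), then duplicate the remaining
    informative groups at random until the batch is full."""
    informative = [g for g in rollout_groups if len(set(g["rewards"])) > 1]
    if not informative:
        return []
    batch = list(informative)
    while len(batch) < batch_size:
        batch.append(random.choice(informative))
    return batch[:batch_size]

# Toy example with hypothetical task IDs and rewards.
groups = [
    {"task": "q1", "rewards": [0.0, 1.0]},  # informative: rewards differ
    {"task": "q2", "rewards": [0.0, 0.0]},  # uninformative: all failures
    {"task": "q3", "rewards": [1.0, 1.0]},  # uninformative: all successes
]
batch = fill_batch_by_duplication(groups, batch_size=4)
```

Under this sketch, only `q1` survives the filter and is duplicated to fill the batch of four; the claimed benefit of such a scheme is keeping every batch element useful for the policy update without waiting on fresh rollouts.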

@article{li2025_2507.02592,
  title={WebSailor: Navigating Super-human Reasoning for Web Agent},
  author={Kuan Li and Zhongwang Zhang and Huifeng Yin and Liwen Zhang and Litu Ou and Jialong Wu and Wenbiao Yin and Baixuan Li and Zhengwei Tao and Xinyu Wang and Weizhou Shen and Junkai Zhang and Dingchu Zhang and Xixi Wu and Yong Jiang and Ming Yan and Pengjun Xie and Fei Huang and Jingren Zhou},
  journal={arXiv preprint arXiv:2507.02592},
  year={2025}
}