InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction

16 May 2025
Bin Lei
Weitai Kang
Zijian Zhang
Winson Chen
Xi Xie
Shan Zuo
Mimi Xie
Ali Payani
Mingyi Hong
Yan Yan
Caiwen Ding
    LLMAG
    LM&Ro
Abstract

This paper introduces InfantAgent-Next, a generalist agent capable of interacting with computers in a multimodal manner, encompassing text, images, audio, and video. Unlike existing approaches that either build intricate workflows around a single large model or provide only workflow-level modularity, our agent integrates tool-based and pure vision agents within a highly modular architecture, enabling different models to collaboratively solve decoupled tasks in a step-by-step manner. We demonstrate this generality by evaluating not only on purely vision-based real-world benchmarks (i.e., OSWorld), but also on more general or tool-intensive benchmarks (e.g., GAIA and SWE-Bench). Specifically, we achieve 7.27% accuracy on OSWorld, higher than Claude-Computer-Use. Code and evaluation scripts are open-sourced at this https URL.
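To make the "modular architecture with decoupled, step-by-step tasks" idea concrete, here is a minimal sketch of one way such a design could look: a controller that routes each step of a plan to either a tool-based specialist or a vision specialist. This is not the paper's actual code; every class, method, and step name below is a hypothetical illustration.

# Minimal sketch of a modular multi-agent loop, in the spirit of the
# abstract's description. All names here are illustrative assumptions,
# not the InfantAgent-Next API.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Step:
    kind: str     # "tool" or "vision": which specialist handles this step
    payload: str  # instruction text for that specialist


class Agent(Protocol):
    def act(self, payload: str) -> str: ...


class ToolAgent:
    """Stands in for an agent that calls external tools (shell, editor, ...)."""

    def act(self, payload: str) -> str:
        return f"[tool-agent] executed: {payload}"


class VisionAgent:
    """Stands in for an agent that reasons over screenshots of the GUI."""

    def act(self, payload: str) -> str:
        return f"[vision-agent] clicked/typed for: {payload}"


class Controller:
    """Routes each decoupled step of a plan to the matching specialist."""

    def __init__(self) -> None:
        # Modularity: specialists are swappable without touching the loop,
        # so different models can back different step kinds.
        self.agents: dict[str, Agent] = {
            "tool": ToolAgent(),
            "vision": VisionAgent(),
        }

    def run(self, steps: list[Step]) -> list[str]:
        # Solve the task step by step; each specialist sees only its own step.
        return [self.agents[step.kind].act(step.payload) for step in steps]


if __name__ == "__main__":
    plan = [
        Step("vision", "open the settings window"),
        Step("tool", "grep the config file for the proxy entry"),
        Step("vision", "toggle the dark-mode switch"),
    ]
    for observation in Controller().run(plan):
        print(observation)

The point of the sketch is the routing boundary: because steps are decoupled, each specialist agent can be backed by a different model and replaced independently, which is the kind of modularity the abstract contrasts with single-model workflows.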

@article{lei2025_2505.10887,
  title={InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction},
  author={Bin Lei and Weitai Kang and Zijian Zhang and Winson Chen and Xi Xie and Shan Zuo and Mimi Xie and Ali Payani and Mingyi Hong and Yan Yan and Caiwen Ding},
  journal={arXiv preprint arXiv:2505.10887},
  year={2025}
}