This paper introduces \textsc{InfantAgent-Next}, a generalist agent capable of interacting with computers in a multimodal manner, encompassing text, images, audio, and video. Unlike existing approaches that either build intricate workflows around a single large model or provide only workflow-level modularity, our agent integrates tool-based and pure-vision agents within a highly modular architecture, enabling different models to collaboratively solve decoupled tasks in a step-by-step manner. We demonstrate this generality by evaluating not only on purely vision-based real-world benchmarks (i.e., OSWorld), but also on more general or tool-intensive benchmarks (e.g., GAIA and SWE-Bench). Specifically, we achieve higher accuracy on OSWorld than Claude-Computer-Use. Code and evaluation scripts are open-sourced at this https URL.
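To make the modular design concrete, below is a minimal Python sketch of the kind of step-wise routing the abstract describes: a task is decomposed into decoupled steps, and each step is dispatched to either a tool-based agent or a pure-vision agent. All class and method names here (Step, ToolAgent, VisionAgent, run_task) are hypothetical illustrations, not the paper's actual API.

from dataclasses import dataclass
from typing import Protocol


@dataclass
class Step:
    description: str
    needs_vision: bool  # True if the step requires screen perception


class Agent(Protocol):
    def act(self, step: Step) -> str: ...


class ToolAgent:
    """Handles tool-intensive steps (e.g., shell commands, code edits)."""

    def act(self, step: Step) -> str:
        return f"[tool] executed: {step.description}"


class VisionAgent:
    """Handles GUI steps driven by screenshots (clicks, typing, scrolling)."""

    def act(self, step: Step) -> str:
        return f"[vision] performed: {step.description}"


def run_task(steps: list[Step]) -> list[str]:
    """Route each decoupled step to the appropriate specialist agent."""
    tool_agent, vision_agent = ToolAgent(), VisionAgent()
    results = []
    for step in steps:
        agent: Agent = vision_agent if step.needs_vision else tool_agent
        results.append(agent.act(step))
    return results


if __name__ == "__main__":
    plan = [
        Step("open the settings window", needs_vision=True),
        Step("apply a patch via the file-editing tool", needs_vision=False),
    ]
    for line in run_task(plan):
        print(line)

Because each step is routed independently, different backbone models can serve different agents, which is one way a highly modular architecture lets heterogeneous models collaborate on a single task.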
@article{lei2025_2505.10887,
  title   = {InfantAgent-Next: A Multimodal Generalist Agent for Automated Computer Interaction},
  author  = {Bin Lei and Weitai Kang and Zijian Zhang and Winson Chen and Xi Xie and Shan Zuo and Mimi Xie and Ali Payani and Mingyi Hong and Yan Yan and Caiwen Ding},
  journal = {arXiv preprint arXiv:2505.10887},
  year    = {2025}
}