OdysseyArena: Benchmarking Large Language Models for Long-Horizon, Active and Inductive Interactions

Fangzhi Xu
Hang Yan
Qiushi Sun
Jinyang Wu
Zixian Huang
Muye Huang
Jingyang Gong
Zichen Ding
Kanzhi Cheng
Yian Wang
Xinyu Che
Zeyi Sun
Jian Zhang
Zhangyue Yin
Haoran Luo
Xuanjing Huang
Ben Kao
Jun Liu
Qika Lin
Main: 8 pages; Bibliography: 3 pages; Appendix: 23 pages; 15 figures; 7 tables
Abstract

The rapid advancement of Large Language Models (LLMs) has catalyzed the development of autonomous agents capable of navigating complex environments. However, existing evaluations primarily adopt a deductive paradigm, in which agents execute tasks from explicitly provided rules and static goals, often within limited planning horizons. Crucially, this neglects the inductive requirement that agents autonomously discover latent transition laws from experience, which is the cornerstone of agentic foresight and sustained strategic coherence. To bridge this gap, we introduce OdysseyArena, which re-centers agent evaluation on long-horizon, active, and inductive interactions. We formalize and instantiate four primitives that translate abstract transition dynamics into concrete interactive environments. Building on this, we establish OdysseyArena-Lite, a standardized benchmark of 120 tasks that measures an agent's inductive efficiency and long-horizon discovery. Pushing further, we introduce OdysseyArena-Challenge to stress-test agent stability across extreme interaction horizons (e.g., > 200 steps). Extensive experiments on 15+ leading LLMs reveal that even frontier models exhibit clear deficiencies in inductive scenarios, exposing a critical bottleneck in the pursuit of autonomous discovery in complex environments. Our code and data are available at this https URL
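The abstract does not detail the four primitives, but the core inductive setting it describes can be illustrated with a minimal sketch: an environment with a hidden transition law that an agent can only recover by actively probing and observing outcomes. Everything here (the `LatentRuleEnv` class, the `induce_transition_law` routine, and the integer-shift dynamics) is an illustrative assumption, not the paper's actual environment design.

```python
import random


class LatentRuleEnv:
    """Toy environment with a hidden transition law the agent must induce.

    The state is an integer; each action shifts it by an undisclosed
    amount that the agent is never told and must discover by probing.
    """

    def __init__(self, seed=0):
        rng = random.Random(seed)
        # Hidden transition law, invisible to the agent.
        self._effects = {a: rng.randint(-3, 3) for a in range(4)}
        self.state = 0

    def step(self, action):
        self.state += self._effects[action]
        return self.state  # only the resulting state is observed


def induce_transition_law(env, actions=range(4), probes_per_action=2):
    """Actively probe each action and induce its effect from observations."""
    hypothesis = {}
    for a in actions:
        deltas = []
        for _ in range(probes_per_action):
            before = env.state
            after = env.step(a)
            deltas.append(after - before)
        # Deterministic toy dynamics: all observed deltas agree,
        # so the first observation already pins down the rule.
        hypothesis[a] = deltas[0]
    return hypothesis


env = LatentRuleEnv(seed=42)
learned = induce_transition_law(env)
print(f"Induced transition law: {learned}")
```

In this framing, an agent's inductive efficiency would correspond to how few probes it needs before its hypothesis matches the hidden rule, and long-horizon discovery to whether the induced law remains usable over hundreds of subsequent steps.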
