v2 (latest)

AI Agents as Universal Task Solvers

26 pages (main text), 5 pages bibliography, 1 page appendix; 1 figure, 2 tables
Abstract

We describe AI agents as stochastic dynamical systems and frame the problem of learning to reason as transductive inference: rather than approximating the distribution of past data, as in classical induction, the objective is to capture its algorithmic structure so as to reduce the time needed to solve new tasks. In this view, information from past experience serves not only to reduce a model's uncertainty, as in Shannon's classical theory, but to reduce the computational effort required to find solutions to unforeseen tasks. Working in the verifiable setting, where a checker or reward function is available, we establish three main results. First, we show that the optimal speed-up on a new task is tightly related to the algorithmic information it shares with the training data, yielding a theoretical justification for the power-law scaling empirically observed in reasoning models. Second, while the compression view of learning, rooted in Occam's Razor, favors simplicity, we show that transductive inference yields its greatest benefits precisely when the data-generating mechanism is most complex. Third, we identify a possible failure mode of naive scaling: in the limit of unbounded model size and compute, models with access to a reward signal can behave as savants, brute-forcing solutions without acquiring transferable reasoning strategies. Accordingly, we argue that a critical quantity to optimize when scaling reasoning models is time, whose role in learning has remained largely unexplored.
