
Test-Time Compute Games

Ander Artola Velasco
Dimitrios Rontogiannis
Stratis Tsirtsis
Manuel Gomez-Rodriguez
Main: 12 pages
42 figures
Bibliography: 5 pages
11 tables
Appendix: 44 pages
Abstract

Test-time compute has emerged as a promising strategy to enhance the reasoning abilities of large language models (LLMs). However, this strategy has in turn increased how much users pay cloud-based providers offering LLM-as-a-service, since providers charge users for the amount of test-time compute used to generate an output. In our work, we show that the LLM-as-a-service market is socially inefficient: providers have a financial incentive to increase the amount of test-time compute, even if this increase contributes little to the quality of the outputs. To address this inefficiency, we introduce a reverse second-price auction mechanism in which providers bid their offered price and (expected) quality for the opportunity to serve a user, and users pay proportionally to the marginal value generated by the winning provider relative to the second-highest bidder. To illustrate and complement our theoretical results, we conduct experiments with multiple instruct models from the Llama and Qwen families, as well as reasoning models distilled from DeepSeek-R1, on math and science benchmark datasets.
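The auction described in the abstract can be sketched as follows. This is an illustrative Python reading of the mechanism, not the paper's formal definition: the linear quality valuation (`value_per_quality`) and the assumption that the payment equals the full score gap to the runner-up are hypothetical simplifications introduced here.

```python
# Hedged sketch of a reverse second-price-style auction for LLM serving.
# Assumptions (not from the paper): the user values quality linearly,
# and the payment is the winner's marginal value over the runner-up.

def run_auction(bids, value_per_quality=1.0):
    """bids: list of (provider_name, price, expected_quality) tuples."""
    # Score each bid by the user's net value under a linear quality valuation.
    def score(bid):
        _, price, quality = bid
        return value_per_quality * quality - price

    ranked = sorted(bids, key=score, reverse=True)
    winner, runner_up = ranked[0], ranked[1]

    # Payment tracks the marginal value of the winning provider relative
    # to the second-highest bidder, as described in the abstract.
    payment = score(winner) - score(runner_up)
    return winner[0], payment


# Usage: provider A offers higher quality at a higher price than B.
winner, payment = run_auction([("A", 2.0, 5.0), ("B", 1.0, 3.0)])
print(winner, payment)  # A wins; pays its score gap over B
```

Because the payment depends on the runner-up's bid rather than the winner's own, a provider cannot raise its revenue simply by inflating its own reported compute, which is the inefficiency the mechanism targets.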
