SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors

16 February 2025
Bohan Lyu
Siqiao Huang
Zichen Liang
Qi-An Sun
Jiaming Zhang
Abstract

Neural surrogate models have emerged as powerful and efficient tools in data mining. Meanwhile, large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks. We investigate a novel application: using LLMs as surrogate models for code execution prediction. Given LLMs' unique ability to understand and process diverse programs, they present a promising direction for building general-purpose surrogate models. To systematically investigate this capability, we introduce SURGE, a comprehensive benchmark with 1160 problems covering 8 key aspects: multi-language programming tasks, competition-level programming problems, repository-level code analysis, high-cost scientific computing, time-complexity-intensive algorithms, buggy code analysis, programs dependent on specific compilers or execution environments, and formal mathematical proof verification. Through extensive empirical analysis of 21 open-source and proprietary LLMs, we examine scaling laws, data efficiency, and predictive accuracy. Our findings reveal important insights about the feasibility of LLMs as efficient surrogates for computational processes, with implications for automated software testing, program analysis, and computational resource optimization in data mining applications. Code and dataset are released at this https URL.
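To make the surrogate-execution setup concrete, below is a minimal sketch of the idea the abstract describes: prompt an LLM to predict a program's stdout, then compare the prediction against ground truth obtained by actually running the code. This is not the authors' released harness (that lives at the linked repository); the prompt wording, the model name, and the helper names predict_output and real_output are illustrative assumptions, and the sketch assumes the OpenAI Python SDK is installed with an API key in the environment.

```python
# Sketch of LLM-as-surrogate-executor (illustrative, not the SURGE codebase).
import subprocess
import sys

from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt format; the paper's actual prompts may differ.
PROMPT_TEMPLATE = (
    "You are a surrogate code executor. Predict the exact stdout of the "
    "following Python program. Reply with the output only, no explanation.\n\n"
    "{code}"
)

def predict_output(code: str, model: str = "gpt-4o-mini") -> str:
    """Ask the LLM to predict stdout without executing the program."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(code=code)}],
        temperature=0,  # favor deterministic predictions
    )
    return response.choices[0].message.content.strip()

def real_output(code: str, timeout: float = 10.0) -> str:
    """Ground truth: actually run the program in a fresh interpreter."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    program = "print(sum(i * i for i in range(10)))"
    predicted, actual = predict_output(program), real_output(program)
    print(f"predicted={predicted!r} actual={actual!r} match={predicted == actual}")
```

Exact-match against real execution, as in this sketch, is only the simplest scoring choice; tasks like the benchmark's high-cost scientific computing or compiler-dependent programs would need tolerance-aware or environment-aware comparisons.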

@article{lyu2025_2502.11167,
  title={SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors},
  author={Bohan Lyu and Siqiao Huang and Zichen Liang and Qi-An Sun and Jiaming Zhang},
  journal={arXiv preprint arXiv:2502.11167},
  year={2025}
}