Ask, and it shall be given: On the Turing completeness of prompting

24 February 2025
Ruizhong Qiu
Zhe Xu
Wenxuan Bao
Hanghang Tong
Communities: ReLM, LRM, AI4CE
Abstract

Since the success of GPT, large language models (LLMs) have been revolutionizing machine learning and have initiated the so-called LLM prompting paradigm: a single general-purpose LLM is trained once and then given different prompts to perform different tasks. However, this empirical success largely lacks theoretical understanding. Here, we present, to the best of our knowledge, the first theoretical study of the LLM prompting paradigm. We show that prompting is in fact Turing-complete: there exists a finite-size Transformer such that, for any computable function, there exists a corresponding prompt following which the Transformer computes that function. Furthermore, even though we use only a single finite-size Transformer, it can still achieve nearly the same complexity bounds as those of the class of all unbounded-size Transformers. Overall, our result reveals that prompting can make a single finite-size Transformer efficiently universal, which establishes a theoretical underpinning for prompt engineering in practice.
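
The heart of the Turing-completeness claim is the order of the quantifiers: the Transformer is fixed once, before the target function is chosen. As a rough sketch in our own notation (the symbols Gamma, R, pi, and Sigma* below are our labels, not necessarily the paper's), the statement reads:

    \[
      \exists\, \Gamma \;\; \forall \varphi \in \mathcal{R} \;\;
      \exists\, \pi_\varphi \in \Sigma^{*} \;\; \forall x \in \Sigma^{*} :
      \quad \Gamma(\pi_\varphi \cdot x) = \varphi(x)
    \]
    % \Gamma      : one fixed, finite-size Transformer, chosen before \varphi
    % \mathcal{R} : the class of computable (partial recursive) functions
    % \pi_\varphi : a prompt encoding \varphi; \pi_\varphi \cdot x is that
    %               prompt followed by the task input x
    % \Gamma(\cdot): the string \Gamma generates when run autoregressively
    %               (decoding details are abstracted away in this sketch)

The contrast with the unbounded-size result in the abstract's last claim is exactly this quantifier order: the class of all Transformers may dedicate an arbitrarily large network to each function, whereas here a single finite-size Gamma must absorb every task through its prompt alone.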

@article{qiu2025_2411.01992,
  title={Ask, and it shall be given: On the Turing completeness of prompting},
  author={Ruizhong Qiu and Zhe Xu and Wenxuan Bao and Hanghang Tong},
  journal={arXiv preprint arXiv:2411.01992},
  year={2025}
}