Ask, and it shall be given: On the Turing completeness of prompting
arXiv:2411.01992 · 24 February 2025
Ruizhong Qiu, Zhe Xu, W. Bao, Hanghang Tong
Communities: ReLM, LRM, AI4CE

Papers citing "Ask, and it shall be given: On the Turing completeness of prompting"

5 of 5 citing papers shown:

Finite State Automata Inside Transformers with Chain-of-Thought: A Mechanistic Study on State Tracking
Yifan Zhang, Wenyu Du, Dongming Jin, Jie Fu, Zhi Jin · LRM · 27 Feb 2025

How Efficient is LLM-Generated Code? A Rigorous & High-Standard Benchmark
Ruizhong Qiu, Weiliang Will Zeng, Hanghang Tong, James Ezick, Christopher Lott · 20 Feb 2025

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers
Alireza Amiri, Xinting Huang, Mark Rofin, Michael Hahn · LRM · 04 Feb 2025

Graph-Aware Isomorphic Attention for Adaptive Dynamics in Transformers
Markus J. Buehler · AI4CE · 04 Jan 2025

Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization
Ruizhong Qiu, Hanghang Tong · 27 May 2024