Efficient Turing Machine Simulation with Transformers
Constant bit-size Transformers are known to be Turing complete, but existing constructions require a large number of chain-of-thought (CoT) steps per simulated Turing machine (TM) step, leading to impractically long reasoning traces. In this paper, we significantly narrow this efficiency gap by proving that any time-bounded multi-tape TM can be simulated by a constant bit-size Transformer with a context window of optimal length and a per-TM-step CoT overhead that can be made arbitrarily small by making the Transformer's head-layer product sufficiently large. In addition, our construction shows that sparse attention with fixed geometric offsets suffices for efficient universal computation. Our proof leverages multi-queue TMs as a bridge. The main technical novelty is a more efficient simulation of multi-tape TMs by synchronous multi-queue TMs, improving both time and space complexity under stricter model assumptions.
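To make "sparse attention with fixed geometric offsets" concrete, the following is a minimal sketch (not the paper's actual pattern) of an attention mask in which each position attends only to itself and to positions at offsets that are powers of a fixed base; the specific base and offset set are assumptions for illustration only.

```python
import numpy as np

def geometric_offset_mask(n, base=2):
    """Boolean attention mask: mask[i, j] is True iff position i may
    attend to position j. Allowed offsets are 0 (the position itself)
    and powers of `base` (1, base, base**2, ...), giving each row only
    O(log n) nonzero entries instead of O(n)."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, i] = True
        d = 1
        while d <= i:
            mask[i, i - d] = True
            d *= base
    return mask

# Example: with n = 8, position 7 attends to positions 7, 6, 5, 3
# (offsets 0, 1, 2, 4), a logarithmic number of keys per query.
print(geometric_offset_mask(8).astype(int))
```

Because the offsets are fixed in advance, the sparsity pattern is data-independent, which is what allows such a mask to be hard-wired into a construction rather than learned.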
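As background on why the multi-queue bridge matters, here is a toy illustration (not the paper's construction) of the classical way a single TM tape can be held in one FIFO queue: the cell under the head is kept at the front, so a right move is one dequeue/enqueue, but a left move forces a full rotation of the queue. Avoiding this kind of per-step blow-up is exactly what an efficient simulation of multi-tape TMs by synchronous multi-queue TMs has to accomplish.

```python
from collections import deque

def make_tape(cells):
    # Tape cells in order starting at the head; "$" marks the right end.
    return deque(list(cells) + ["$"])

def read(tape):
    return tape[0]          # the cell under the head is always at the front

def write(tape, sym):
    tape[0] = sym

def move_right(tape):
    # O(1): cycle the head cell to the back of the queue.
    tape.append(tape.popleft())

def move_left(tape):
    # O(n): the whole queue must rotate once -- the naive overhead that a
    # careful multi-queue simulation is designed to avoid.
    for _ in range(len(tape) - 1):
        tape.append(tape.popleft())

tape = make_tape("abc")     # head on 'a'
move_right(tape)            # head on 'b'
write(tape, "B")
move_left(tape)             # head back on 'a'
print(list(tape))
```

The asymmetry between the O(1) right move and the O(n) left move is what makes naive queue simulations slow, motivating constructions that spread the tape across several queues.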