Provable Failure of Language Models in Learning Majority Boolean Logic via Gradient Descent

Recent advancements in Transformer-based architectures have led to impressive breakthroughs in natural language processing tasks, with models such as GPT-4, Claude, and Gemini demonstrating human-level reasoning abilities. However, despite their strong performance, concerns remain about the inherent limitations of these models, especially when it comes to learning basic logical functions. While complexity-theoretic analyses indicate that Transformers can represent simple logic functions (e.g., $\mathsf{AND}$, $\mathsf{OR}$, and majority gates) by virtue of belonging to the $\mathsf{TC}^0$ class, these results assume ideal parameter settings and do not account for the constraints imposed by gradient descent-based training methods. In this work, we investigate whether Transformers can truly learn simple majority functions when trained using gradient-based methods. We focus on a simplified variant of the Transformer architecture and consider both polynomially many and exponentially many training samples (in the input length $d$), where each sample is a $d$-bit binary string paired with the output of a basic majority function. Our analysis demonstrates that even after polynomially many gradient queries, the generalization error of the Transformer model remains substantially large, growing exponentially with $d$. This work highlights fundamental optimization challenges in training Transformers for the simplest logical reasoning tasks and provides new insights into their theoretical limitations.
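
The learning setup described in the abstract (binary strings labeled by the majority function, a simplified attention model, plain gradient descent) can be illustrated with a small PyTorch sketch. The model, input length d, sample counts, learning rate, and step count below are illustrative assumptions and are not taken from the paper; the paper's results are theoretical lower bounds, so this is only a way to experiment with the setup, not a reproduction of its analysis.

# Minimal sketch (assumed setup, not the paper's construction): train a tiny
# single-layer attention model on the majority function with full-batch
# gradient descent and report train/test error.
import torch
import torch.nn as nn

d = 32          # input length (bits per sample); assumed value
n_train = 2048  # number of training samples; assumed value
n_test = 4096   # held-out samples for estimating generalization error

def majority_data(num, d):
    # Each sample is a d-bit binary string; label is 1 iff more than half the bits are 1.
    x = torch.randint(0, 2, (num, d)).float()
    y = (x.sum(dim=1) > d / 2).float()
    return x, y

class TinyAttention(nn.Module):
    # Stand-in for a "simplified variant of the Transformer architecture":
    # one self-attention layer over per-bit embeddings, mean pooling, linear readout.
    def __init__(self, d_model=16):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        self.attn = nn.MultiheadAttention(d_model, num_heads=1, batch_first=True)
        self.readout = nn.Linear(d_model, 1)

    def forward(self, x):
        h = self.embed(x.unsqueeze(-1))                 # (batch, d, d_model)
        h, _ = self.attn(h, h, h)                       # single attention layer
        return self.readout(h.mean(dim=1)).squeeze(-1)  # pooled logit per sample

def error_rate(model, x, y):
    # Fraction of samples misclassified by thresholding the sigmoid output.
    with torch.no_grad():
        pred = (torch.sigmoid(model(x)) > 0.5).float()
    return (pred != y).float().mean().item()

torch.manual_seed(0)
x_train, y_train = majority_data(n_train, d)
x_test, y_test = majority_data(n_test, d)

model = TinyAttention()
opt = torch.optim.SGD(model.parameters(), lr=0.1)  # plain gradient descent
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):  # polynomially many gradient queries
    opt.zero_grad()
    loss_fn(model(x_train), y_train).backward()
    opt.step()

print(f"train error: {error_rate(model, x_train, y_train):.3f}")
print(f"test  error: {error_rate(model, x_test, y_test):.3f}")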
@article{chen2025_2504.04702,
  title   = {Provable Failure of Language Models in Learning Majority Boolean Logic via Gradient Descent},
  author  = {Bo Chen and Zhenmei Shi and Zhao Song and Jiahao Zhang},
  journal = {arXiv preprint arXiv:2504.04702},
  year    = {2025}
}