On the Complexity of Neural Computation in Superposition

5 September 2024
Micah Adler
Nir Shavit
Abstract

Superposition, the ability of neural networks to represent more features than neurons, is increasingly seen as key to the efficiency of large models. This paper investigates the theoretical foundations of computing in superposition, establishing complexity bounds for explicit, provably correct algorithms. We present the first lower bounds for a neural network computing in superposition, showing that for a broad class of problems, including permutations and pairwise logical operations, computing $m'$ features in superposition requires at least $\Omega(\sqrt{m' \log m'})$ neurons and $\Omega(m' \log m')$ parameters. This implies the first subexponential upper bound on superposition capacity: a network with $n$ neurons can compute at most $O(n^2 / \log n)$ features. Conversely, we provide a nearly tight constructive upper bound: logical operations like pairwise AND can be computed using $O(\sqrt{m'} \log m')$ neurons and $O(m' \log^2 m')$ parameters. There is thus an exponential gap between the complexity of computing in superposition (the subject of this work) versus merely representing features, which can require as little as $O(\log m')$ neurons based on the Johnson-Lindenstrauss Lemma. Our hope is that our results open a path for using complexity theoretic techniques in neural network interpretability research.
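
To make the representing-versus-computing distinction concrete, here is a minimal Python sketch of the Johnson-Lindenstrauss-style idea the abstract alludes to: many sparse features are stored in far fewer neurons via a random projection and read back out by correlation. The dimensions, the Gaussian embedding, and the top-k read-out are illustrative assumptions of ours, not the paper's construction, and the sketch only represents features; it performs no computation on them.

import numpy as np

# Toy illustration of *representing* features in superposition.
# m_prime sparse features are embedded into n << m_prime "neurons"
# via a random projection, then recovered by correlating the
# compressed activation with each feature's embedding direction.
# Illustrative only; not the construction from the paper.

rng = np.random.default_rng(0)

m_prime = 10_000          # number of features
n = 400                   # number of neurons (n << m_prime)
k = 5                     # number of simultaneously active features

# Random embedding: one (approximately unit-norm) direction per feature.
E = rng.standard_normal((n, m_prime)) / np.sqrt(n)

# A sparse feature vector with k active features.
active = rng.choice(m_prime, size=k, replace=False)
x = np.zeros(m_prime)
x[active] = 1.0

# Neuron activations: the superposed representation.
h = E @ x                 # shape (n,)

# Read-out: correlate with every feature direction and keep the top k.
scores = E.T @ h          # shape (m_prime,)
recovered = np.argsort(scores)[-k:]

print("active   :", sorted(active.tolist()))
print("recovered:", sorted(recovered.tolist()))

With these sizes the active features are recovered with high probability because the cross-talk between random directions is small; computing new features from such a representation, rather than merely storing them, is exactly where the paper's lower bounds apply.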

@article{adler2025_2409.15318,
  title={On the Complexity of Neural Computation in Superposition},
  author={Micah Adler and Nir Shavit},
  journal={arXiv preprint arXiv:2409.15318},
  year={2025}
}