Slow Transition to Low-Dimensional Chaos in Heavy-Tailed Recurrent Neural Networks

14 May 2025
Yi Xie
Stefan Mihalas
Łukasz Kuśmierz
Abstract

Growing evidence suggests that synaptic weights in the brain follow heavy-tailed distributions, yet most theoretical analyses of recurrent neural networks (RNNs) assume Gaussian connectivity. We systematically study the activity of RNNs with random weights drawn from biologically plausible Lévy alpha-stable distributions. While mean-field theory for the infinite system predicts that the quiescent state is always unstable -- implying ubiquitous chaos -- our finite-size analysis reveals a sharp transition between quiescent and chaotic dynamics. We theoretically predict the gain at which the system transitions from quiescent to chaotic dynamics, and validate the prediction through simulations. Compared to Gaussian networks, heavy-tailed RNNs exhibit a broader parameter regime near the edge of chaos, that is, a slower transition to chaos. However, this robustness comes with a tradeoff: heavier tails reduce the Lyapunov dimension of the attractor, indicating lower effective dimensionality. Our results reveal a biologically aligned tradeoff between the robustness of dynamics near the edge of chaos and the richness of high-dimensional neural activity. By analytically characterizing the transition point in finite-size networks -- where mean-field theory breaks down -- we provide a tractable framework for understanding dynamics in realistically sized, heavy-tailed neural circuits.
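The setup in the abstract can be illustrated with a short simulation. Below is a minimal sketch, not the authors' code: it draws i.i.d. symmetric Lévy alpha-stable weights, Euler-integrates a standard rate RNN, and sweeps the gain g to see a small perturbation of the quiescent state either decay or settle into sustained fluctuations. The function name simulate_rnn, the N^(-1/alpha) scaling convention, and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.stats import levy_stable

def simulate_rnn(N=500, alpha=1.5, g=1.0, T=200.0, dt=0.1, seed=0):
    """Euler-integrate dx/dt = -x + J tanh(x) with alpha-stable weights.

    Assumption for illustration: entries of J are i.i.d. symmetric
    (beta = 0) alpha-stable, with scale chosen as g * N**(-1/alpha) so
    the recurrent input stays O(1) as N grows; for alpha = 2 this
    reduces, up to a constant, to the familiar Gaussian g/sqrt(N).
    """
    rng = np.random.default_rng(seed)
    J = levy_stable.rvs(alpha, 0.0, scale=g * N ** (-1.0 / alpha),
                        size=(N, N), random_state=seed)
    x = 1e-3 * rng.standard_normal(N)  # small kick off the quiescent state
    norms = []
    for _ in range(int(T / dt)):
        x = x + dt * (-x + J @ np.tanh(x))
        norms.append(np.linalg.norm(x) / np.sqrt(N))
    return np.array(norms)

# Sweep the gain: below the finite-size transition the perturbation
# decays back toward quiescence; above it, activity remains sustained.
for g in (0.5, 1.0, 2.0):
    late = simulate_rnn(g=g)[-20:].mean()
    print(f"g = {g:.1f}: late-time activity ~ {late:.3g}")

Averaging the last few activity norms is a crude but convenient order parameter for this sketch; locating the transition precisely, as the paper does analytically, would require repeating the sweep over many weight realizations and network sizes.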

View on arXiv
@article{xie2025_2505.09816,
  title={Slow Transition to Low-Dimensional Chaos in Heavy-Tailed Recurrent Neural Networks},
  author={Yi Xie and Stefan Mihalas and Łukasz Kuśmierz},
  journal={arXiv preprint arXiv:2505.09816},
  year={2025}
}