Overparameterized Neural Networks Implement Associative Memory

26 September 2019
Adityanarayanan Radhakrishnan, Mikhail Belkin, Caroline Uhler
Abstract

Identifying computational mechanisms for memorization and retrieval of data is a long-standing problem at the intersection of machine learning and neuroscience. Our main finding is that standard overparameterized deep neural networks trained using standard optimization methods implement such a mechanism for real-valued data. Empirically, we show that: (1) overparameterized autoencoders store training samples as attractors, so iterating the learned map leads to sample recovery; (2) the same mechanism also encodes sequences of examples, and does so even more efficiently than autoencoding. Theoretically, we prove that an autoencoder trained on a single example stores that example as an attractor. Lastly, by treating a sequence encoder as a composition of maps, we prove that sequence encoding provides a more efficient mechanism for memory than autoencoding.
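The two empirical findings are straightforward to prototype. The sketch below is a hypothetical illustration, not the authors' code: the architecture (a width-512 tanh MLP), optimizer, learning rate, training budget, and toy Gaussian data are all assumptions made for the demo. It first trains an overparameterized autoencoder f to interpolate a few samples and iterates the learned map from a corrupted query; it then trains a sequence encoder g that maps each example to the next, so iterating g replays the stored sequence.

```python
# Minimal sketch (not the paper's code) of the attractor phenomenon:
# an overparameterized autoencoder is trained to near-zero loss on a
# few points, then the learned map is iterated from a perturbed input.
# Width, depth, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n = 16, 3                       # data dimension, number of training samples
X = torch.randn(n, d)              # toy "training set" of real-valued examples

# Heavily overparameterized MLP autoencoder f: R^d -> R^d
f = nn.Sequential(
    nn.Linear(d, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, d),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

# Train to (near-)interpolation: f(x_i) ~ x_i for every training sample
for step in range(5000):
    opt.zero_grad()
    loss = ((f(X) - X) ** 2).mean()
    loss.backward()
    opt.step()

# Retrieval: iterate the learned map from a corrupted version of X[0].
# If X[0] is stored as an attractor, the iterates f(f(...f(z)...))
# converge back to it.
with torch.no_grad():
    z = X[0] + 0.3 * torch.randn(d)    # perturbed query
    for _ in range(100):
        z = f(z)
    print("distance to stored sample:", (z - X[0]).norm().item())

# Sequence encoding: train g so that g(x_i) = x_{i+1} (cyclically).
# Iterating g from x_0 then replays the stored sequence, the mechanism
# the abstract argues is more memory-efficient than autoencoding.
g = nn.Sequential(
    nn.Linear(d, 512), nn.Tanh(),
    nn.Linear(512, 512), nn.Tanh(),
    nn.Linear(512, d),
)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
Y = torch.roll(X, shifts=-1, dims=0)   # target for each sample: the next one
for step in range(5000):
    opt_g.zero_grad()
    loss = ((g(X) - Y) ** 2).mean()
    loss.backward()
    opt_g.step()

with torch.no_grad():
    z = X[0]
    for i in range(1, n):
        z = g(z)                        # should land near X[i]
        print(f"step {i}: distance to X[{i}] =", (z - X[i]).norm().item())
```

Convergence of the iterates f(f(...f(z)...)) back to X[0] is exactly what it means for the training sample to be stored as an attractor; in a toy run like this, observing it hinges on first driving the reconstruction loss close to zero.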

View on arXiv: https://arxiv.org/abs/1909.12362