How much do language models memorize?

30 May 2025
John X. Morris, Chawin Sitawarin, Chuan Guo, Narine Kokhlikyan, G. Edward Suh, Alexander M. Rush, Kamalika Chaudhuri, Saeed Mahloujifar
Main: 10 pages · Bibliography: 4 pages · Appendix: 7 pages · 3 figures · 2 tables
Abstract

We propose a new method for estimating how much a model knows about a datapoint and use it to measure the capacity of modern language models. Prior studies of language model memorization have struggled to disentangle memorization from generalization. We formally separate memorization into two components: unintended memorization, the information a model contains about a specific dataset, and generalization, the information a model contains about the true data-generation process. When we completely eliminate generalization, we can compute the total memorization, which provides an estimate of model capacity: our measurements estimate that GPT-style models have a capacity of approximately 3.6 bits per parameter. We train language models on datasets of increasing size and observe that models memorize until their capacity fills, at which point "grokking" begins, and unintended memorization decreases as models begin to generalize. We train hundreds of transformer language models ranging from 500K to 1.5B parameters and produce a series of scaling laws relating model capacity and data size to membership inference.
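As a back-of-envelope illustration of the abstract's capacity figure, here is a minimal Python sketch. Only the 3.6 bits-per-parameter estimate and the 500K/1.5B endpoints come from the abstract; the 10M intermediate size and the saturation toy model are illustrative assumptions, not the paper's method.

```python
# Back-of-envelope illustration of the ~3.6 bits/parameter capacity estimate.
# The paper measures memorization empirically; this is a crude sketch only.

BITS_PER_PARAM = 3.6  # capacity estimate reported in the abstract


def capacity_bits(n_params: float) -> float:
    """Approximate total memorization capacity, in bits, for a model with n_params parameters."""
    return BITS_PER_PARAM * n_params


def unintended_memorization(dataset_bits: float, n_params: float) -> float:
    """Toy saturation model (assumption, not the paper's estimator):
    memorization grows with dataset information content until capacity fills,
    after which additional data cannot be memorized and generalization takes over."""
    return min(dataset_bits, capacity_bits(n_params))


if __name__ == "__main__":
    # 500K and 1.5B are the endpoints of the model range in the abstract;
    # 10M is a hypothetical intermediate size for illustration.
    for name, n in [("500K", 5e5), ("10M", 1e7), ("1.5B", 1.5e9)]:
        bits = capacity_bits(n)
        print(f"{name:>5} params -> ~{bits:.3g} bits (~{bits / 8 / 1e6:.3g} MB)")
```

Under this reading, a 1.5B-parameter model can store roughly 5.4 billion bits (about 675 MB) of dataset-specific information; once the training set's information content exceeds that budget, unintended memorization plateaus and then declines as the model generalizes, matching the "grokking" onset described above.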

@article{morris2025_2505.24832,
  title={How much do language models memorize?},
  author={John X. Morris and Chawin Sitawarin and Chuan Guo and Narine Kokhlikyan and G. Edward Suh and Alexander M. Rush and Kamalika Chaudhuri and Saeed Mahloujifar},
  journal={arXiv preprint arXiv:2505.24832},
  year={2025}
}