Trade-offs in Data Memorization via Strong Data Processing Inequalities

Main: 20 pages, 2 figures
Bibliography: 3 pages
Appendix: 16 pages
Abstract

Recent research has demonstrated that training large language models involves memorizing a significant fraction of the training data. Such memorization can lead to privacy violations when training on sensitive user data, which motivates the study of the role of data memorization in learning. In this work, we develop a general approach for proving lower bounds on excess data memorization that relies on a new connection between strong data processing inequalities and data memorization. We then demonstrate that several simple and natural binary classification problems exhibit a trade-off between the number of samples available to a learning algorithm and the amount of information about the training data that the algorithm needs to memorize in order to be accurate. In particular, $\Omega(d)$ bits of information about the training data need to be memorized when $O(1)$ $d$-dimensional examples are available, and this requirement decays as the number of examples grows, at a problem-specific rate. Further, our lower bounds are generally matched (up to logarithmic factors) by simple learning algorithms. We also extend our lower bounds to more general mixture-of-clusters models. Our definitions and results build on the work of Brown et al. (2021) and address several limitations of the lower bounds in their work.
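
A schematic reading of this trade-off, in notation adapted from the framework of Brown et al. (2021) rather than quoted from the paper: writing $S = (x_1, \dots, x_n)$ for the training sample, $A(S)$ for the learner's output, and $I(\cdot\,;\cdot)$ for mutual information in bits, the abstract's claim takes the form

% Schematic form only; the paper gives problem-specific rates
% for how the bound decays as n grows.
\[
  I\bigl(A(S);\, S\bigr) \;\geq\; \Omega(d)
  \qquad \text{when } n = O(1),\ x_1, \dots, x_n \in \mathbb{R}^d .
\]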

@article{feldman2025_2506.01855,
  title={Trade-offs in Data Memorization via Strong Data Processing Inequalities},
  author={Vitaly Feldman and Guy Kornowski and Xin Lyu},
  journal={arXiv preprint arXiv:2506.01855},
  year={2025}
}