Training Foundation Models as Data Compression: On Information, Model Weights and Copyright Law

18 July 2024
Giorgio Franceschelli
Claudia Cevenini
Mirco Musolesi
Abstract

The training process of foundation models, as for other classes of deep learning systems, is based on minimizing the reconstruction error over a training set. For this reason, they are susceptible to the memorization and subsequent reproduction of training samples. In this paper, we introduce a training-as-compressing perspective, wherein the model's weights embody a compressed representation of the training data. From a copyright standpoint, this point of view implies that the weights can be considered a reproduction or, more likely, a derivative work of a potentially protected set of works. We investigate the technical and legal challenges that emerge from this framing of the copyright of outputs generated by foundation models, including the implications for practitioners and researchers. We demonstrate that adopting an information-centric approach to the problem presents a promising pathway for tackling these emerging, complex legal issues.
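
The training-as-compressing framing rests on a standard information-theoretic identity: under an entropy coder, a model p assigns each sample x a code length of roughly -log2 p(x) bits, so training by likelihood maximization is, in effect, minimizing the compressed size of the training set. The sketch below illustrates this connection on toy Gaussian data; it is an illustrative example, not code from the paper, and all names in it are hypothetical.

# Minimal sketch (assumption: not the paper's method) of the model-as-compressor
# link the abstract builds on. Under arithmetic coding, a model p assigns each
# sample x an ideal code length of about -log2 p(x) bits, so minimizing the
# negative log-likelihood during training also minimizes the compressed size
# of the training set; the fitted parameters act as a compressed summary.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.5, size=10_000)  # toy "training set"

# "Training": fit model parameters (here, a Gaussian) to the data.
mu, sigma = data.mean(), data.std()

def nll_bits(x, mu, sigma):
    """Ideal code length in bits per sample under a Gaussian model,
    i.e. -log2 p(x); a better model of the source compresses it better."""
    log_pdf = -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
    return -log_pdf / np.log(2)

fitted_bits = nll_bits(data, mu, sigma).mean()        # trained model
mismatched_bits = nll_bits(data, 0.0, 1.0).mean()     # untrained model

print(f"bits/sample, fitted model:     {fitted_bits:.3f}")
print(f"bits/sample, mismatched model: {mismatched_bits:.3f}")
# The fitted (mu, sigma) pair, together with the coder, yields a shorter
# description of the data than the mismatched model: the weights carry
# information extracted from the training set.

On this toy example the fitted model codes the data in roughly a third of the bits the mismatched one needs, which is the quantitative sense in which trained weights can be said to embody a compressed representation of the training data.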

@article{franceschelli2025_2407.13493,
  title={Training Foundation Models as Data Compression: On Information, Model Weights and Copyright Law},
  author={Giorgio Franceschelli and Claudia Cevenini and Mirco Musolesi},
  journal={arXiv preprint arXiv:2407.13493},
  year={2025}
}