Augmenting Anonymized Data with AI: Exploring the Feasibility and Limitations of Large Language Models in Data Enrichment

3 April 2025
Stefano Cirillo, Domenico Desiato, Giuseppe Polese, Monica Maria Lucia Sebillo, Giandomenico Solimando
Abstract

Large Language Models (LLMs) have demonstrated advanced capabilities in both text generation and comprehension, and their application to data archives may help keep sensitive information about data subjects private. Indeed, data sources often contain sensitive and personally identifiable details which, if not safeguarded, pose privacy risks in terms of both disclosure and identification. Furthermore, anonymization techniques such as k-anonymity can significantly reduce the amount of usable data in a source, which in turn may reduce the efficacy of predictive processes. In this study, we investigate the capability of LLMs to enrich anonymized data sources without compromising their anonymity. To this end, we designed new ad hoc prompt-template engineering strategies to perform anonymized data augmentation and to assess the effectiveness of LLM-based approaches in producing anonymized data. To validate the anonymization guarantees provided by LLMs, we used the pyCanon library, which computes the parameter values associated with the most common anonymization-based privacy-preserving techniques. Our experiments on real-world datasets show that LLMs yield promising results for this task.
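The abstract outlines a two-step pipeline: prompt an LLM to generate additional rows for an already-anonymized table, then verify with pyCanon that the anonymity guarantees (e.g. k-anonymity) have not degraded. The sketch below is an illustration of that idea, not the authors' code: the prompt template, the `call_llm` helper, and the column names are assumptions, and the pyCanon calls follow the library's documented `anonymity` module, so check the signatures against the version you install.

```python
import io

import pandas as pd
from pycanon import anonymity  # pip install pycanon

# Quasi-identifiers and sensitive attribute of the (already anonymized) table.
# Column names are illustrative, not taken from the paper.
QUASI_IDENTIFIERS = ["age_range", "zip_prefix", "education"]
SENSITIVE_ATTR = ["diagnosis"]

# Hypothetical prompt template for anonymized data augmentation: the LLM is
# asked to produce new rows that stay inside the generalized value ranges
# already present in the anonymized table.
PROMPT_TEMPLATE = (
    "You are given rows from a k-anonymized table with columns {columns}.\n"
    "Generate {n} additional plausible rows. Reuse only the generalized "
    "values already present for the quasi-identifiers {qi}; do not invent "
    "more specific values.\n\nRows:\n{rows}\n"
    "Return the new rows as CSV without a header."
)


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call; expected to return CSV-formatted rows."""
    raise NotImplementedError("plug in your LLM client here")


def augment_and_validate(df: pd.DataFrame, n_new: int = 50) -> pd.DataFrame:
    """Ask the LLM for synthetic rows, then check that k-anonymity did not degrade."""
    prompt = PROMPT_TEMPLATE.format(
        columns=list(df.columns),
        n=n_new,
        qi=QUASI_IDENTIFIERS,
        rows=df.sample(min(20, len(df))).to_csv(index=False),
    )
    new_rows = pd.read_csv(io.StringIO(call_llm(prompt)), names=list(df.columns))
    augmented = pd.concat([df, new_rows], ignore_index=True)

    # pyCanon recomputes k on the original and augmented tables.
    k_before = anonymity.k_anonymity(df, QUASI_IDENTIFIERS)
    k_after = anonymity.k_anonymity(augmented, QUASI_IDENTIFIERS)
    assert k_after >= k_before, "augmentation weakened k-anonymity"
    return augmented
```

In the same spirit, pyCanon's `anonymity.l_diversity` and `anonymity.t_closeness` functions could be applied to the augmented table to check the other privacy parameters mentioned in the paper's validation setup.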

@article{cirillo2025_2504.03778,
  title={Augmenting Anonymized Data with AI: Exploring the Feasibility and Limitations of Large Language Models in Data Enrichment},
  author={Stefano Cirillo and Domenico Desiato and Giuseppe Polese and Monica Maria Lucia Sebillo and Giandomenico Solimando},
  journal={arXiv preprint arXiv:2504.03778},
  year={2025}
}