ResearchTrend.AI

Private Text Generation by Seeding Large Language Model Prompts

20 February 2025
Supriya Nagesh
Justin Y. Chen
Nina Mishra
Tal Wagner
    SyDa
    SILM
Abstract

We explore how private synthetic text can be generated by suitably prompting a large language model (LLM). This addresses a challenge for organizations like hospitals, which hold sensitive text data such as patient medical records and wish to share it in order to train machine learning models for medical tasks, while preserving patient privacy. Methods that rely on training or fine-tuning a model may be out of reach, either due to API limits of third-party LLMs, or due to ethical and legal prohibitions on sharing the private data with the LLM itself.

We propose Differentially Private Keyphrase Prompt Seeding (DP-KPS), a method that generates a private synthetic text corpus from a sensitive input corpus by accessing an LLM only through privatized prompts. It is based on seeding the prompts with private samples from a distribution over phrase embeddings, thus capturing the input corpus while achieving the requisite output diversity and maintaining differential privacy. We evaluate DP-KPS on downstream ML text classification tasks and show that the corpora it generates preserve much of the predictive power of the originals. Our findings offer hope that institutions can reap ML insights by privately sharing data with simple prompts and little compute.
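To make the seeding idea concrete, here is a minimal illustrative sketch, not the authors' actual algorithm: it privatizes the mean of the private corpus's keyphrase embeddings with the Gaussian mechanism, then selects the nearest public keyphrases to seed an LLM prompt. The function name, the use of a mean embedding, and the public keyphrase vocabulary are all assumptions for illustration; the paper's method samples from a distribution over phrase embeddings and handles sensitivity and diversity more carefully.

```python
import numpy as np

def dp_seed_keyphrases(private_embs, public_phrases, public_embs,
                       k=3, epsilon=1.0, delta=1e-5, sensitivity=1.0, rng=None):
    """Toy sketch (NOT the paper's DP-KPS): privatize the mean embedding
    of the private corpus via the Gaussian mechanism, then return the k
    public keyphrases nearest to the noisy mean, to seed an LLM prompt."""
    rng = np.random.default_rng(rng)
    mean = private_embs.mean(axis=0)
    # Standard Gaussian-mechanism noise scale for (epsilon, delta)-DP;
    # the mean query's sensitivity shrinks with the number of records.
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    noisy = mean + rng.normal(0.0, sigma / len(private_embs), size=mean.shape)
    # Release only public phrases, ranked by distance to the noisy mean.
    dists = np.linalg.norm(public_embs - noisy, axis=1)
    idx = np.argsort(dists)[:k]
    return [public_phrases[i] for i in idx]

# Toy 2-D "embeddings" standing in for real phrase embeddings.
priv = np.array([[0.9, 0.1], [0.8, 0.2]])
pub_phrases = ["cardiology", "oncology", "billing"]
pub_embs = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
seeds = dp_seed_keyphrases(priv, pub_phrases, pub_embs, k=2, rng=0)
prompt = f"Write a synthetic clinical note mentioning: {', '.join(seeds)}."
```

Because the LLM only ever sees the released keyphrases (post-processing of a DP output), the generated text inherits the differential-privacy guarantee, which is the core idea the abstract describes.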

View on arXiv
@article{nagesh2025_2502.13193,
  title={Private Text Generation by Seeding Large Language Model Prompts},
  author={Supriya Nagesh and Justin Y. Chen and Nina Mishra and Tal Wagner},
  journal={arXiv preprint arXiv:2502.13193},
  year={2025}
}