Synthetic Text Generation for Training Large Language Models via Gradient Matching

Abstract

Synthetic data has the potential to improve model performance and training efficiency while protecting the privacy of real training examples. Nevertheless, existing approaches to synthetic text generation are mostly heuristic: they cannot generate human-readable text without compromising the privacy of real data, nor do they provide performance guarantees for training Large Language Models (LLMs). In this work, we propose the first theoretically rigorous approach for generating synthetic human-readable text that guarantees the convergence and performance of LLMs during fine-tuning on a target task. To do so, we leverage the Alternating Direction Method of Multipliers (ADMM), which iteratively optimizes the embeddings of synthetic examples to match the gradient of the target training or validation data and maps them to sequences of text tokens with low perplexity. The generated synthetic text thereby guarantees convergence of the model to a close neighborhood of the solution obtained by fine-tuning on real data. Experiments on various classification tasks confirm the effectiveness of our proposed approach.

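To make the gradient-matching idea concrete, below is a minimal sketch of the general technique the abstract describes, not the paper's actual algorithm: it uses a toy linear classifier instead of an LLM, and a simple alternation between a continuous gradient-matching step and a nearest-token projection step as a crude stand-in for the ADMM formulation and the low-perplexity token mapping. All names here (`vocab_emb`, `model_grad`, sizes, hyperparameters) are illustrative assumptions.

```python
# Sketch: optimize synthetic embeddings so their gradient matches the
# gradient of real data, then periodically snap them to token embeddings.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d, num_classes, vocab_size = 32, 2, 100      # embedding dim, labels, toy vocabulary
n_real, n_syn = 64, 8                        # real and synthetic set sizes

# Toy "model": a linear classifier whose gradient we want to match.
W = torch.randn(num_classes, d, requires_grad=True)

# Real data (stands in for embeddings of real training text).
x_real = torch.randn(n_real, d)
y_real = torch.randint(0, num_classes, (n_real,))

# Token embedding table used to project continuous embeddings back to tokens.
vocab_emb = torch.randn(vocab_size, d)

def model_grad(x, y):
    """Gradient of the cross-entropy loss w.r.t. W for a given batch."""
    loss = F.cross_entropy(x @ W.t(), y)
    return torch.autograd.grad(loss, W, create_graph=True)[0]

g_real = model_grad(x_real, y_real).detach()

# Synthetic embeddings (optimized) and fixed synthetic labels.
x_syn = torch.randn(n_syn, d, requires_grad=True)
y_syn = torch.randint(0, num_classes, (n_syn,))
opt = torch.optim.Adam([x_syn], lr=0.05)

for step in range(200):
    # Continuous step: minimize the gradient-matching loss.
    g_syn = model_grad(x_syn, y_syn)
    match_loss = (g_syn - g_real).pow(2).sum()
    opt.zero_grad()
    match_loss.backward()
    opt.step()

    # Discrete step (simplified stand-in for the ADMM token-mapping step):
    # periodically snap each synthetic embedding to its nearest token embedding.
    if (step + 1) % 50 == 0:
        with torch.no_grad():
            nearest = torch.cdist(x_syn, vocab_emb).argmin(dim=1)
            x_syn.copy_(vocab_emb[nearest])

print(f"final gradient-matching loss: {match_loss.item():.4f}")
```

The projection step here is a plain nearest-neighbor snap; the paper's ADMM formulation instead couples the embedding optimization with the discrete token constraint and favors token sequences with low perplexity.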
@article{nguyen2025_2502.17607,
  title={Synthetic Text Generation for Training Large Language Models via Gradient Matching},
  author={Dang Nguyen and Zeman Li and Mohammadhossein Bateni and Vahab Mirrokni and Meisam Razaviyayn and Baharan Mirzasoleiman},
  journal={arXiv preprint arXiv:2502.17607},
  year={2025}
}