
Improved Algorithms for Differentially Private Language Model Alignment

Abstract

Language model alignment is crucial for ensuring that large language models (LLMs) align with human preferences, yet it often involves sensitive user data, raising significant privacy concerns. While prior work has integrated differential privacy (DP) with alignment techniques, the resulting performance remains limited. In this paper, we propose novel algorithms for privacy-preserving alignment and rigorously analyze their effectiveness across varying privacy budgets and models. Our framework can be deployed on two widely used alignment techniques, namely direct preference optimization (DPO) and reinforcement learning from human feedback (RLHF). Through systematic experiments on large-scale language models, we demonstrate that our approach achieves state-of-the-art performance. Notably, one of our algorithms, DP-AdamW, combined with DPO, surpasses existing methods, improving alignment quality by up to 15% under moderate privacy budgets (ε = 2–5). We further investigate the interplay between privacy guarantees, alignment efficacy, and computational demands, providing practical guidelines for optimizing these trade-offs.
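The abstract names DP-AdamW, a differentially private variant of AdamW, as the best-performing optimizer when paired with DPO. The paper's exact algorithm is not given here, so the following is only a minimal sketch of one plausible DP-AdamW step: DP-SGD-style per-example gradient clipping and Gaussian noise, followed by a standard AdamW update with decoupled weight decay. All names and hyperparameters (clip_norm, noise_multiplier, and so on) are illustrative assumptions, not the authors' implementation; in the paper's setting the per-example gradients would come from the DPO or RLHF objective rather than a toy array.

```python
import numpy as np

def dp_adamw_step(params, per_example_grads, state, lr=1e-4,
                  clip_norm=1.0, noise_multiplier=1.0,
                  betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01,
                  rng=None):
    """One DP-AdamW step (sketch): clip per-example gradients, add Gaussian
    noise to their sum, then apply an AdamW update with decoupled weight
    decay. Hyperparameter names are illustrative, not the paper's."""
    rng = rng or np.random.default_rng()
    n = per_example_grads.shape[0]

    # Per-example clipping to L2 norm `clip_norm` (as in DP-SGD).
    norms = np.linalg.norm(per_example_grads.reshape(n, -1), axis=1)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale[:, None]

    # Noisy mean gradient: Gaussian noise calibrated to the clip norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    g = (clipped.sum(axis=0) + noise) / n

    # Standard Adam moment updates on the privatized gradient.
    state["t"] += 1
    b1, b2 = betas
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])

    # Decoupled weight decay (the "W" in AdamW), applied outside the moments.
    params = params - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * params)
    return params, state


# Toy usage: one step on a batch of 8 per-example gradients for a 4-dim parameter.
params = np.zeros(4)
state = {"m": np.zeros(4), "v": np.zeros(4), "t": 0}
per_example_grads = np.random.default_rng(0).normal(size=(8, 4))
params, state = dp_adamw_step(params, per_example_grads, state)
print(params)
```

Because the noise is added to gradients whose per-example contribution is bounded by clip_norm, the privacy cost of each step can be tracked with a standard accountant; the alignment loss itself (DPO or an RLHF objective) only determines where the per-example gradients come from.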

@article{chen2025_2505.08849,
  title={Improved Algorithms for Differentially Private Language Model Alignment},
  author={Keyu Chen and Hao Tang and Qinglin Liu and Yizhao Xu},
  journal={arXiv preprint arXiv:2505.08849},
  year={2025}
}