
SquareχPO: Differentially Private and Robust χ²-Preference Optimization in Offline Direct Alignment

27 May 2025
Xingyu Zhou
Yulian Wu
Wenqian Weng
Francesco Orabona
Main: 9 pages | Bibliography: 7 pages | 1 table | Appendix: 9 pages
Abstract

In this paper, we theoretically study the offline alignment of language models with human preference feedback, under both preference-label corruption and privacy protection. To this end, we propose SquareχPO, a simple one-line change to χPO in which the standard log-loss is replaced by a new square loss over probabilities. Thanks to the inherent properties of this new loss, we advance the state of the art in differentially private and robust offline direct alignment. Specifically, under the local model of label privacy, SquareχPO is the first algorithm to attain an optimal rate based on single-policy concentrability, even with general function approximation. It also gives the first result under the central model of privacy protection over both prompts (responses) and labels. On the robustness side, against Huber label corruption, SquareχPO is the first alignment method with a meaningful theoretical guarantee under general function approximation. More importantly, SquareχPO can address privacy protection and corruption simultaneously, where an interesting separation is observed, implying that the order of privacy and corruption matters. Furthermore, we show that SquareχPO can easily be extended to the general preference model, with state-of-the-art guarantees under corruption and privacy. Last but not least, all of our theoretical guarantees rest on a unified analysis, built upon a new result on the generalization error bounds of least-squares regression under corruption and privacy constraints, which we believe is of independent interest to the community.
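To make the "one-line change" concrete, the following is a minimal sketch of the loss swap the abstract describes. The exact objective is defined in the paper; here we only assume the common preference-learning setup where a reward margin is mapped to a win probability via a sigmoid, and the log-loss on that probability is replaced by a squared error on the same probability. All function names are illustrative, not the authors' code.

```python
import math

def sigmoid(z: float) -> float:
    """Map a reward margin to a preference probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(margin: float, label: float) -> float:
    """Standard preference log-loss (binary cross-entropy on the
    predicted win probability). Unbounded as p -> 0 or 1."""
    p = sigmoid(margin)
    return -(label * math.log(p) + (1.0 - label) * math.log(1.0 - p))

def square_loss(margin: float, label: float) -> float:
    """Hypothetical 'square loss over probability': squared error
    between the predicted win probability and the (possibly corrupted
    or privatized) binary label. Bounded in [0, 1]."""
    p = sigmoid(margin)
    return (p - label) ** 2
```

Intuitively, the boundedness of the square loss (at most 1, versus the unbounded log-loss) is the kind of property that controls sensitivity for differential privacy and limits the damage a single corrupted label can do; the paper's analysis makes this precise via least-squares regression bounds.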

@article{zhou2025_2505.21395,
  title   = {Square$\chi$PO: Differentially Private and Robust $\chi^2$-Preference Optimization in Offline Direct Alignment},
  author  = {Xingyu Zhou and Yulian Wu and Wenqian Weng and Francesco Orabona},
  journal = {arXiv preprint arXiv:2505.21395},
  year    = {2025}
}