ResearchTrend.AI
Direct Density Ratio Optimization: A Statistically Consistent Approach to Aligning Large Language Models

12 May 2025
Rei Higuchi
Taiji Suzuki
Abstract

Aligning large language models (LLMs) with human preferences is crucial for safe deployment, yet existing methods assume a specific preference model, such as the Bradley-Terry model. This assumption leads to statistical inconsistency: more data does not guarantee convergence to true human preferences. To address this critical gap, we introduce a novel alignment method, Direct Density Ratio Optimization (DDRO). DDRO directly estimates the density ratio between the preferred and unpreferred output distributions, circumventing the need for explicit human preference modeling. We theoretically prove that DDRO is statistically consistent, ensuring convergence to the true preferred distribution as the data size grows, regardless of the underlying preference structure. Experiments demonstrate that DDRO achieves superior performance compared to existing methods on many major benchmarks. DDRO unlocks the potential for truly data-driven alignment, paving the way for more reliable and human-aligned LLMs.
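The core idea of the abstract, estimating the density ratio between preferred and unpreferred distributions without a preference model, can be illustrated with a standard classifier-based density ratio estimator. The sketch below is illustrative only and is not the paper's DDRO objective: it uses toy 1-D Gaussian features in place of LLM outputs, and recovers the ratio from a logistic classifier's odds, a well-known equivalence when the two classes are balanced.

```python
# Illustrative classifier-based density ratio estimation (NOT the paper's
# DDRO objective): with balanced samples, a logistic classifier's odds
# P(preferred|x)/P(unpreferred|x) equal the ratio p_pref(x)/p_unpref(x).
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D stand-ins for features of "preferred" vs. "unpreferred" outputs.
preferred = rng.normal(loc=1.0, scale=1.0, size=(1000, 1))
unpreferred = rng.normal(loc=-1.0, scale=1.0, size=(1000, 1))

X = np.vstack([preferred, unpreferred])
y = np.concatenate([np.ones(1000), np.zeros(1000)])  # 1 = preferred

# Fit logistic regression by plain gradient descent on the log-loss.
w, b = np.zeros(1), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid(w·x + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def density_ratio(x):
    # exp(logit) = odds = estimated p_pref(x) / p_unpref(x).
    return np.exp(np.asarray(x) @ w + b)

# The ratio should exceed 1 where preferred outputs are denser (x > 0)
# and fall below 1 where unpreferred outputs dominate (x < 0).
print(density_ratio([[1.0]]), density_ratio([[-1.0]]))
```

For these two unit-variance Gaussians the true log-ratio is 2x, so the learned ratio should be well above 1 at x = 1 and well below 1 at x = -1. The actual DDRO method works directly on LLM output distributions and comes with the consistency guarantees described in the paper; see the arXiv preprint for the real objective.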

@article{higuchi2025_2505.07558,
  title={Direct Density Ratio Optimization: A Statistically Consistent Approach to Aligning Large Language Models},
  author={Rei Higuchi and Taiji Suzuki},
  journal={arXiv preprint arXiv:2505.07558},
  year={2025}
}