Alignment with Preference Optimization Is All You Need for LLM Safety

12 September 2024
Réda Alami
Ali Khalifa Almansoori
Ahmed Alzubaidi
M. Seddik
Mugariya Farooq
Hakim Hacid
arXiv: 2409.07772

Papers citing "Alignment with Preference Optimization Is All You Need for LLM Safety"

1 / 1 papers shown
Noise Contrastive Alignment of Language Models with Explicit Rewards
Huayu Chen
Guande He
Lifan Yuan
Ganqu Cui
Hang Su
Jun Zhu
08 Feb 2024