
Self-Consistency Preference Optimization

6 November 2024
Archiki Prasad
Weizhe Yuan
Richard Yuanzhe Pang
Jing Xu
Maryam Fazel-Zarandi
Joey Tianyi Zhou
Sainbayar Sukhbaatar
Jason Weston
Jane Dwivedi-Yu
    LRM
arXiv (abs: 2411.04109) · PDF · HTML · HuggingFace (19 upvotes)
Main: 7 pages · 3 figures · Bibliography: 4 pages · 10 tables · Appendix: 4 pages
Abstract

Self-alignment, whereby models learn to improve themselves without human annotation, is a rapidly growing research area. However, existing techniques often fail to improve complex reasoning tasks due to the difficulty of assigning correct rewards. An orthogonal approach that is known to improve correctness is self-consistency, a method applied at inference time based on multiple sampling in order to find the most consistent answer. In this work, we extend the self-consistency concept to help train models. We thus introduce self-consistency preference optimization (ScPO), which iteratively trains consistent answers to be preferred over inconsistent ones on unsupervised new problems. We show ScPO leads to large improvements over conventional reward model training on reasoning tasks such as GSM8K and MATH, closing the gap with supervised training with gold answers or preferences, and that combining ScPO with standard supervised learning improves results even further. On ZebraLogic, ScPO finetunes Llama-3 8B to be superior to Llama-3 70B, Gemma-2 27B, and Claude-3 Haiku.
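
The abstract describes ScPO at a high level: sample several candidate solutions to an unlabeled problem, treat the most self-consistent (majority-vote) answer as preferred and an inconsistent answer as dispreferred, and train on those pairs. The sketch below is a minimal illustration of how such preference pairs could be built from vote counts; it is not the authors' implementation, and the sample data format and the `build_preference_pair` helper are hypothetical.

```python
from collections import Counter

def build_preference_pair(samples):
    """Turn multiple sampled solutions to one unlabeled problem into a
    (chosen, rejected) preference pair via self-consistency voting.

    samples: list of (reasoning_text, final_answer) tuples drawn from the
    current model at nonzero temperature (hypothetical format).
    """
    votes = Counter(answer for _, answer in samples)
    ranked = votes.most_common()
    top_answer, top_count = ranked[0]    # most consistent final answer
    low_answer, low_count = ranked[-1]   # least consistent final answer
    if top_answer == low_answer:
        return None  # all samples agree; no contrastive pair for this problem
    chosen = next(r for r, a in samples if a == top_answer)
    rejected = next(r for r, a in samples if a == low_answer)
    # Normalized vote margin; could be used to weight this pair in the loss.
    margin = (top_count - low_count) / len(samples)
    return {"chosen": chosen, "rejected": rejected, "margin": margin}

# Toy usage: five sampled solutions to one unlabeled math problem.
samples = [
    ("... so the total is 42.", "42"),
    ("... which gives 42.", "42"),
    ("... therefore the answer is 40.", "40"),
    ("... hence 42.", "42"),
    ("... the answer is 42.", "42"),
]
print(build_preference_pair(samples))
# {'chosen': '... so the total is 42.', 'rejected': '... therefore the answer is 40.', 'margin': 0.6}
```

Pairs built this way could then be fed to a standard preference-optimization objective (for example, a DPO-style loss), with the vote margin available as a per-example weight; the paper's exact loss and weighting scheme are not reproduced here.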
