Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models

11 December 2023
Avi Singh
John D. Co-Reyes
Rishabh Agarwal
Ankesh Anand
Piyush Patil
Xavier Garcia
Peter J. Liu
James Harrison
Jaehoon Lee
Kelvin Xu
Aaron T Parisi
Abhishek Kumar
A. Alemi
Alex Rizkowsky
Azade Nova
Ben Adlam
Bernd Bohnet
Gamaleldin F. Elsayed
Hanie Sedghi
Igor Mordatch
Isabelle Simpson
Izzeddin Gur
Jasper Snoek
Jeffrey Pennington
Jiri Hron
Kathleen Kenealy
Kevin Swersky
Kshiteej Mahajan
Laura J. Culp
Lechao Xiao
Maxwell Bileschi
Noah Constant
Roman Novak
Rosanne Liu
T. Warkentin
Yundi Qian
Yamini Bansal
Ethan Dyer
Behnam Neyshabur
Jascha Narain Sohl-Dickstein
Noah Fiedel
Communities: ALM, LRM, ReLM, SyDa
Abstract

Fine-tuning language models (LMs) on human-generated data remains a prevalent practice. However, the performance of such models is often limited by the quantity and diversity of high-quality human data. In this paper, we explore whether we can go beyond human data on tasks where we have access to scalar feedback, for example, on math problems where one can verify correctness. To do so, we investigate a simple self-training method based on expectation-maximization, which we call ReST^{EM}, where we (1) generate samples from the model and filter them using binary feedback, (2) fine-tune the model on these samples, and (3) repeat this process a few times. Testing on advanced MATH reasoning and APPS coding benchmarks using PaLM-2 models, we find that ReST^{EM} scales favorably with model size and significantly surpasses fine-tuning only on human data. Overall, our findings suggest self-training with feedback can substantially reduce dependence on human-generated data.
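The generate-filter-fine-tune loop in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate`, `reward`, and `fine_tune` are hypothetical placeholders standing in for LM sampling, answer verification, and supervised fine-tuning, and the toy model below is just a probability of emitting a correct answer.

```python
import random

def rest_em(generate, reward, fine_tune, model, n_iters=3, n_samples=32):
    """Minimal sketch of the ReST^{EM} loop described in the abstract.

    generate(model) -> sample   : draw one output from the model (E-step)
    reward(sample)  -> 0 or 1   : binary feedback, e.g. answer verification
    fine_tune(model, samples)   : fit the model to the kept samples (M-step)
    """
    for _ in range(n_iters):
        # E-step: sample from the current model, keep only verified-correct outputs.
        samples = [generate(model) for _ in range(n_samples)]
        kept = [s for s in samples if reward(s) == 1]
        # M-step: fine-tune on the filtered, self-generated data.
        if kept:
            model = fine_tune(model, kept)
    return model

# Toy instantiation (hypothetical): the "model" is just a probability of
# answering a math question correctly; fine-tuning nudges it upward in
# proportion to how many correct samples were kept.
random.seed(0)
initial = {"p_correct": 0.3}
gen = lambda m: "4" if random.random() < m["p_correct"] else "5"
rew = lambda s: 1 if s == "4" else 0

def ft(m, kept):
    # Stand-in for supervised fine-tuning on the kept samples.
    return {"p_correct": min(1.0, m["p_correct"] + 0.2 * len(kept) / 32)}

trained = rest_em(gen, rew, ft, initial)
```

Note that only binary (correct/incorrect) feedback is needed to drive the loop, which is why tasks with verifiable answers, such as math and coding, are natural targets.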
