Distill Not Only Data but Also Rewards: Can Smaller Language Models Surpass Larger Ones?

26 February 2025
Yudi Zhang
Lu Wang
Meng Fang
Yali Du
Chenghua Huang
Jun Wang
Qingwei Lin
Mykola Pechenizkiy
Dongmei Zhang
Saravan Rajmohan
Qi Zhang
Abstract

Distilling large language models (LLMs) typically involves transferring the teacher model's responses through supervised fine-tuning (SFT). However, this approach neglects the potential to distill both data (output content) and reward signals (quality evaluations). Extracting reliable reward signals directly from teacher models is challenging, as LLMs are optimized for generation rather than evaluation, often resulting in biased or inconsistent assessments. To address this limitation, we propose a novel distillation pipeline that transfers both responses and rewards. Our method generates pseudo-rewards through a self-supervised mechanism that leverages the inherent structure of both teacher and student responses, enabling reward learning without explicit external evaluation. The reward model subsequently guides reinforcement learning (RL), allowing iterative refinement of the student model after an SFT warm-up phase. Experiments on GSM8K and MMLU-PRO demonstrate that our method consistently outperforms traditional SFT-based approaches, enabling student models to surpass the performance of their teachers. This work highlights the potential for scalable, efficient distillation through structured self-supervised reward learning, reducing dependence on external reward supervision.
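As a rough illustration of the two-stage pipeline the abstract describes (an SFT warm-up on teacher responses, followed by RL guided by self-supervised pseudo-rewards), the sketch below uses hypothetical placeholder functions (teacher_generate, student_generate, pseudo_reward, sft_update, rl_update). The actual pseudo-reward construction and RL algorithm are defined in the paper, not here; this is only a minimal outline of the control flow.

# Minimal sketch of the two-stage "distill data and rewards" pipeline described
# in the abstract. All names and function bodies are hypothetical placeholders,
# not the authors' implementation.

def teacher_generate(prompt):
    # Placeholder: the teacher LLM's response to a prompt.
    return f"teacher answer to: {prompt}"

def student_generate(student, prompt):
    # Placeholder: the student LLM's response under its current parameters.
    return f"student answer to: {prompt} (update {student['updates']})"

def pseudo_reward(teacher_resp, student_resp):
    # Placeholder self-supervised reward: here, token-set overlap between the
    # teacher and student responses. (Assumption: the paper's mechanism exploits
    # response structure more carefully than simple overlap.)
    t, s = set(teacher_resp.split()), set(student_resp.split())
    return len(t & s) / max(len(t | s), 1)

def sft_update(student, prompt, target_response):
    # Placeholder supervised fine-tuning step on the teacher's response.
    student["updates"] += 1

def rl_update(student, prompt, response, reward):
    # Placeholder policy-optimization step driven by the pseudo-reward.
    student["updates"] += 1

prompts = ["What is 2 + 3?", "Name a prime number greater than 10."]
student = {"updates": 0}

# Stage 1: SFT warm-up on teacher responses (data distillation).
for p in prompts:
    sft_update(student, p, teacher_generate(p))

# Stage 2: RL refinement guided by pseudo-rewards (reward distillation).
for p in prompts:
    response = student_generate(student, p)
    reward = pseudo_reward(teacher_generate(p), response)
    rl_update(student, p, response, reward)

The key point the abstract makes is that the reward signal is derived from the structure of the teacher and student responses themselves, rather than from asking the teacher model to act as an evaluator, which it argues yields biased or inconsistent assessments.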

View on arXiv: https://arxiv.org/abs/2502.19557
@article{zhang2025_2502.19557,
  title={Distill Not Only Data but Also Rewards: Can Smaller Language Models Surpass Larger Ones?},
  author={Yudi Zhang and Lu Wang and Meng Fang and Yali Du and Chenghua Huang and Jun Wang and Qingwei Lin and Mykola Pechenizkiy and Dongmei Zhang and Saravan Rajmohan and Qi Zhang},
  journal={arXiv preprint arXiv:2502.19557},
  year={2025}
}