
Turning Dust into Gold: Distilling Complex Reasoning Capabilities from LLMs by Leveraging Negative Data

20 December 2023
Yiwei Li, Peiwen Yuan, Shaoxiong Feng, Boyuan Pan, Bin Sun, Xinglin Wang, Heda Wang, Kan Li
LRM

Papers citing "Turning Dust into Gold: Distilling Complex Reasoning Capabilities from LLMs by Leveraging Negative Data"

11 / 11 papers shown
Efficient Reasoning Models: A Survey
Sicheng Feng, Gongfan Fang, Xinyin Ma, Xinchao Wang
ReLM, LRM · 110 · 0 · 0 · 15 Apr 2025

Scientific Reasoning: Assessment of Multimodal Generative LLMs
Florian Dreyer, Ekaterina Kolos, Daria Matiash
ReLM, LRM · 62 · 0 · 0 · 03 Mar 2025

ALinFiK: Learning to Approximate Linearized Future Influence Kernel for Scalable Third-Party LLM Data Valuation
Yanzhou Pan, Huawei Lin, Yide Ran, Jiamin Chen, Xiaodong Yu, Weijie Zhao, Denghui Zhang, Zhaozhuo Xu
40 · 0 · 0 · 02 Mar 2025

Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks
Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Jun Zhao
LRM · 53 · 3 · 0 · 20 Sep 2024

Retrieved In-Context Principles from Previous Mistakes
Hao-Lun Sun, Yong-jia Jiang, Bo Wang, Yingyan Hou, Yan Zhang, Pengjun Xie, Fei Huang
50 · 1 · 0 · 08 Jul 2024

Distilling Reasoning Ability from Large Language Models with Adaptive Thinking
Xiao Chen, Sihang Zhou, K. Liang, Xinwang Liu
ReLM, LRM · 27 · 2 · 0 · 14 Apr 2024

Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
ALM · 206 · 499 · 0 · 03 May 2023

Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
ELM, AI4MH, AI4CE, ALM · 251 · 2,232 · 0 · 22 Mar 2023

Heterogeneous-Branch Collaborative Learning for Dialogue Generation
Yiwei Li, Shaoxiong Feng, Bin Sun, Kan Li
9 · 3 · 0 · 21 Mar 2023

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM, LRM · 293 · 4,077 · 0 · 24 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE · 297 · 3,236 · 0 · 21 Mar 2022