SKIntern: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models

20 September 2024
Huanxuan Liao, Shizhu He, Yupu Hao, Xiang Li, Yuanzhe Zhang, Kang Liu, Jun Zhao
LRM

Papers citing "SKIntern: Internalizing Symbolic Knowledge for Distilling Better CoT Capabilities into Small Language Models"

3 papers shown
Efficient Reasoning for LLMs through Speculative Chain-of-Thought
Jikai Wang, J. Li, Lijun Wu, M. Zhang
LLMAG, LRM
59 · 1 · 0
27 Apr 2025

Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, ..., Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen
OffRL, ReLM, LRM
62 · 21 · 0
20 Mar 2025

Neural-Symbolic Collaborative Distillation: Advancing Small Language Models for Complex Reasoning Tasks
Huanxuan Liao, Shizhu He, Yao Xu, Yuanzhe Zhang, Kang Liu, Jun Zhao
LRM
46 · 3 · 0
20 Sep 2024