Teaching-Assistant-in-the-Loop: Improving Knowledge Distillation from Imperfect Teacher Models in Low-Budget Scenarios

Annual Meeting of the Association for Computational Linguistics (ACL), 2024
8 June 2024
Yuhang Zhou, Wei Ai

Papers citing "Teaching-Assistant-in-the-Loop: Improving Knowledge Distillation from Imperfect Teacher Models in Low-Budget Scenarios"

6 citing papers
Mitigating Spurious Correlations Between Question and Answer via Chain-of-Thought Correctness Perception Distillation
Hongyan Xie, Yitong Yao, Yikun Ban, Zixuan Huang, Deqing Wang, Zhenhe Wu, Haoxiang Su, Chao Wang, Shuangyong Song
06 Sep 2025
Learning from Diverse Reasoning Paths with Routing and Collaboration
Zhenyu Lei, Zhen Tan, Song Wang, Yaochen Zhu, Zihan Chen, Yushun Dong, Jundong Li
23 Aug 2025
Semantically-Aware Rewards for Open-Ended R1 Training in Free-Form Generation
Zongxia Li, Yapei Chang, Yuhang Zhou, Xiyang Wu, Zichao Liang, Yoo Yeon Sung, Jordan L. Boyd-Graber
18 Jun 2025
MergeME: Model Merging Techniques for Homogeneous and Heterogeneous MoEs
North American Chapter of the Association for Computational Linguistics (NAACL), 2025
Yuhang Zhou, Giannis Karamanolakis, Victor Soto, Anna Rumshisky, Mayank Kulkarni, Furong Huang, Wei Ai, Jianhua Lu
03 Feb 2025
Multi-Stage Balanced Distillation: Addressing Long-Tail Challenges in Sequence-Level Knowledge Distillation
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Yuhang Zhou, Jing Zhu, Paiheng Xu, Xiaoyu Liu, Xiyao Wang, Danai Koutra, Wei Ai, Furong Huang
19 Jun 2024
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement
Xiyao Wang, Jiuhai Chen, Zhaoyang Wang, Yuhang Zhou, Yiyang Zhou, ..., Wanrong Zhu, Tom Goldstein, Parminder Bhatia, Furong Huang, Cao Xiao
24 May 2024