ResearchTrend.AI

Hard Gate Knowledge Distillation -- Leverage Calibration for Robust and Reliable Language Model

arXiv: 2210.12427 · 22 October 2022

Authors: Dongkyu Lee, Zhiliang Tian, Ying Zhao, Ka Chun Cheung, N. Zhang

Links: ArXiv · PDF · HTML

Papers citing "Hard Gate Knowledge Distillation -- Leverage Calibration for Robust and Reliable Language Model"

1 / 1 papers shown

Title: Learning to Maximize Mutual Information for Chain-of-Thought Distillation
Authors: Xin Chen, Hanxian Huang, Yanjun Gao, Yi Wang, Jishen Zhao, Ke Ding
Date: 05 Mar 2024