A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training

3 May 2023 · arXiv:2305.02031
Nitay Calderon, Subhabrata Mukherjee, Roi Reichart, Amir Kantor

Papers citing "A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training"

5 papers shown
The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata
27 Jan 2025

BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pretraining of BabyLM
Zhewen Shen, Aditya Joshi, Ruey-Cheng Chen
17 Jun 2024 · CLL

Relating Neural Text Degeneration to Exposure Bias
Ting-Rui Chiang, Yun-Nung Chen
17 Sep 2021

Fixing exposure bias with imitation learning needs powerful oracles
L. Hormann, Artem Sokolov
09 Sep 2021

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei-ping Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
02 Feb 2021 · VLM