A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training
Nitay Calderon, Subhabrata Mukherjee, Roi Reichart, Amir Kantor
arXiv:2305.02031, 3 May 2023

Papers citing "A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training" (5 papers)

The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata (27 Jan 2025)

BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pretraining of BabyLM
Zhewen Shen, Aditya Joshi, Ruey-Cheng Chen (17 Jun 2024)

Relating Neural Text Degeneration to Exposure Bias
Ting-Rui Chiang, Yun-Nung Chen (17 Sep 2021)

Fixing exposure bias with imitation learning needs powerful oracles
L. Hormann, Artem Sokolov (09 Sep 2021)

The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei-ping Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou (02 Feb 2021)