ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2305.02031 · Cited By
A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training

3 May 2023
Nitay Calderon, Subhabrata Mukherjee, Roi Reichart, Amir Kantor

Papers citing "A Systematic Study of Knowledge Distillation for Natural Language Generation with Pseudo-Target Training"

15 / 15 papers shown
The Effect of Optimal Self-Distillation in Noisy Gaussian Mixture Model
Kaito Takanami, Takashi Takahashi, Ayaka Sakata
27 Jan 2025
DiDOTS: Knowledge Distillation from Large-Language-Models for Dementia Obfuscation in Transcribed Speech
Dominika Woszczyk, Soteris Demetriou
05 Oct 2024
Automatic Metrics in Natural Language Generation: A Survey of Current Evaluation Practices
Patrícia Schmidtová, Saad Mahamood, Simone Balloccu, Ondřej Dušek, Albert Gatt, Dimitra Gkatzia, David M. Howcroft, Ondřej Plátek, Adarsa Sivaprasad
17 Aug 2024
On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
Nitay Calderon, Roi Reichart
27 Jul 2024
Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application
Chuanpeng Yang, Wang Lu, Yao Zhu, Yidong Wang, Qian Chen, Chenlong Gao, Bingjie Yan, Yiqiang Chen
ALM, KELM
02 Jul 2024
BAMBINO-LM: (Bilingual-)Human-Inspired Continual Pretraining of BabyLM
Zhewen Shen, Aditya Joshi, Ruey-Cheng Chen
CLL
17 Jun 2024
AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation
Zihao Tang, Zheqi Lv, Shengyu Zhang, Yifan Zhou, Xinyu Duan, Fei Wu, Kun Kuang
11 Mar 2024
PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
Gyeongman Kim, Doohyuk Jang, Eunho Yang
VLM
20 Feb 2024
The Colorful Future of LLMs: Evaluating and Improving LLMs as Emotional Supporters for Queer Youth
Shir Lissak, Nitay Calderon, Geva Shenkman, Yaakov Ophir, Eyal Fruchter, A. Klomek, Roi Reichart
AI4MH
19 Feb 2024
Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals
Y. Gat, Nitay Calderon, Amir Feder, Alexander Chapanin, Amit Sharma, Roi Reichart
01 Oct 2023
Prompt2Model: Generating Deployable Models from Natural Language Instructions
Vijay Viswanathan, Chenyang Zhao, Amanda Bertsch, Tongshuang Wu, Graham Neubig
23 Aug 2023
Measuring the Robustness of NLP Models to Domain Shifts
Nitay Calderon, Naveh Porat, Eyal Ben-David, Alexander Chapanin, Zorik Gekhman, Nadav Oved, Vitaly Shalumov, Roi Reichart
31 May 2023
Relating Neural Text Degeneration to Exposure Bias
Ting-Rui Chiang, Yun-Nung Chen
17 Sep 2021
Fixing exposure bias with imitation learning needs powerful oracles
L. Hormann, Artem Sokolov
09 Sep 2021
The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics
Sebastian Gehrmann, Tosin P. Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, Aremu Anuoluwapo, ..., Nishant Subramani, Wei-ping Xu, Diyi Yang, Akhila Yerukola, Jiawei Zhou
VLM
02 Feb 2021