Don't Throw Away Data: Better Sequence Knowledge Distillation

15 July 2024
Jun Wang, Eleftheria Briakou, Hamid Dadkhahi, Rishabh Agarwal, Colin Cherry, Trevor Cohn

Papers citing "Don't Throw Away Data: Better Sequence Knowledge Distillation"

4 papers shown
Memorization Inheritance in Sequence-Level Knowledge Distillation for Neural Machine Translation
Verna Dankers, Vikas Raunak
03 Feb 2025
Better Instruction-Following Through Minimum Bayes Risk
Ian Wu, Patrick Fernandes, Amanda Bertsch, Seungone Kim, Sina Pakazad, Graham Neubig
03 Oct 2024
Faster Minimum Bayes Risk Decoding with Confidence-based Pruning
Julius Cheng, Andreas Vlachos
25 Nov 2023
MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods
M. Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, Markus Freitag
19 Sep 2023