Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation

5 March 2020 (arXiv:2003.02877)
Mitchell A. Gordon, Kevin Duh
Tags: CLL, VLM

Papers citing "Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation"

3 / 3 papers shown
Fast Vocabulary Transfer for Language Model Compression
Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni
15 Feb 2024

Stolen Subwords: Importance of Vocabularies for Machine Translation Model Stealing
Vilém Zouhar
Tags: AAML
29 Jan 2024

Datasheet for the Pile
Stella Biderman, Kieran Bicheno, Leo Gao
13 Jan 2022