arXiv: 2003.02877
Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation
5 March 2020
Mitchell A. Gordon, Kevin Duh
Papers citing "Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation"
3 / 3 papers shown
Fast Vocabulary Transfer for Language Model Compression
Leonidas Gee, Andrea Zugarini, Leonardo Rigutini, Paolo Torroni
15 Feb 2024

Stolen Subwords: Importance of Vocabularies for Machine Translation Model Stealing
Vilém Zouhar
29 Jan 2024

Datasheet for the Pile
Stella Biderman, Kieran Bicheno, Leo Gao
13 Jan 2022