MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers

25 February 2020
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou
VLM

Papers citing "MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers"

44 / 144 papers shown
A Closer Look at Self-Supervised Lightweight Vision Transformers
Shaoru Wang, Jin Gao, Zeming Li, Jian-jun Sun, Weiming Hu
ViT · 62 · 41 · 0 · 28 May 2022

DistilCamemBERT: a distillation of the French model CamemBERT
Cyrile Delestre, Abibatou Amar
22 · 5 · 0 · 23 May 2022

A Fast Attention Network for Joint Intent Detection and Slot Filling on Edge Devices
Liang Huang, Senjie Liang, Feiyang Ye, Nan Gao
48 · 3 · 0 · 16 May 2022

Chemical transformer compression for accelerating both training and inference of molecular modeling
Yi Yu, K. Börjesson
19 · 0 · 0 · 16 May 2022

Natural Language Inference with Self-Attention for Veracity Assessment of Pandemic Claims
Miguel Arana Catania, E. Kochkina, A. Zubiaga, M. Liakata, Rob Procter, Yulan He
23 · 10 · 0 · 05 May 2022

Generalized Knowledge Distillation via Relationship Matching
Han-Jia Ye, Su Lu, De-Chuan Zhan
FedML · 22 · 20 · 0 · 04 May 2022

Attention Mechanism in Neural Networks: Where it Comes and Where it Goes
Derya Soydaner
3DV · 31 · 149 · 0 · 27 Apr 2022
MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
Simiao Zuo, Qingru Zhang, Chen Liang, Pengcheng He, T. Zhao, Weizhu Chen
MoE · 14 · 38 · 0 · 15 Apr 2022

MiniViT: Compressing Vision Transformers with Weight Multiplexing
Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan
ViT · 13 · 123 · 0 · 14 Apr 2022

Pyramid-BERT: Reducing Complexity via Successive Core-set based Token Selection
Xin Huang, A. Khetan, Rene Bidart, Zohar S. Karnin
17 · 14 · 0 · 27 Mar 2022

LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval
Canwen Xu, Daya Guo, Nan Duan, Julian McAuley
RALM · VLM · 8 · 45 · 0 · 11 Mar 2022

Text and Code Embeddings by Contrastive Pre-Training
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, ..., Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng
SSL · AI4TS · 213 · 420 · 0 · 24 Jan 2022
DeepSpeed-MoE: Advancing Mixture-of-Experts Inference and Training to Power Next-Generation AI Scale
Samyam Rajbhandari, Conglong Li, Z. Yao, Minjia Zhang, Reza Yazdani Aminabadi, A. A. Awan, Jeff Rasley, Yuxiong He
21 · 282 · 0 · 14 Jan 2022

ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation
Shuohuan Wang, Yu Sun, Yang Xiang, Zhihua Wu, Siyu Ding, ..., Tian Wu, Wei Zeng, Ge Li, Wen Gao, Haifeng Wang
ELM · 31 · 78 · 0 · 23 Dec 2021

Evaluating Pretrained Transformer Models for Entity Linking in Task-Oriented Dialog
Sai Muralidhar Jayanthi, Varsha Embar, Karthik Raghunathan
13 · 6 · 0 · 15 Dec 2021

ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction
Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, Matei A. Zaharia
14 · 380 · 0 · 02 Dec 2021

Self-slimmed Vision Transformer
Zhuofan Zong, Kunchang Li, Guanglu Song, Yali Wang, Yu Qiao, B. Leng, Yu Liu
ViT · 16 · 30 · 0 · 24 Nov 2021
Arch-Net: Model Distillation for Architecture Agnostic Model Deployment
Weixin Xu, Zipeng Feng, Shuangkang Fang, Song Yuan, Yi Yang, Shuchang Zhou
MQ · 14 · 1 · 0 · 01 Nov 2021

FacTeR-Check: Semi-automated fact-checking through Semantic Similarity and Natural Language Inference
Alejandro Martín, Javier Huertas-Tato, Álvaro Huertas-García, Guillermo Villar-Rodríguez, David Camacho
HILM · 17 · 31 · 0 · 27 Oct 2021

Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models
Qinyuan Ye, Madian Khabsa, M. Lewis, Sinong Wang, Xiang Ren, Aaron Jaech
29 · 5 · 0 · 16 Oct 2021

Kronecker Decomposition for GPT Compression
Ali Edalati, Marzieh S. Tahaei, Ahmad Rashid, V. Nia, J. Clark, Mehdi Rezagholizadeh
34 · 33 · 0 · 15 Oct 2021

Knowledge Distillation with Noisy Labels for Natural Language Understanding
Shivendra Bhardwaj, Abbas Ghaddar, Ahmad Rashid, Khalil Bibi, Cheng-huan Li, A. Ghodsi, Philippe Langlais, Mehdi Rezagholizadeh
19 · 1 · 0 · 21 Sep 2021

EfficientBERT: Progressively Searching Multilayer Perceptron via Warm-up Knowledge Distillation
Chenhe Dong, Guangrun Wang, Hang Xu, Jiefeng Peng, Xiaozhe Ren, Xiaodan Liang
16 · 28 · 0 · 15 Sep 2021
Exploring the Promises of Transformer-Based LMs for the Representation of Normative Claims in the Legal Domain
Reto Gubelmann, Peter Hongler, Siegfried Handschuh
AILaw · 8 · 0 · 0 · 25 Aug 2021

FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks
Sheng-Chun Kao, Suvinay Subramanian, Gaurav Agrawal, Amir Yazdanbakhsh, T. Krishna
30 · 57 · 0 · 13 Jul 2021

Reinforcement Learning for Abstractive Question Summarization with Question-aware Semantic Rewards
S. Yadav, D. Gupta, Asma Ben Abacha, Dina Demner-Fushman
OffRL · 8 · 33 · 0 · 01 Jul 2021

Pre-Trained Models: Past, Present and Future
Xu Han, Zhengyan Zhang, Ning Ding, Yuxian Gu, Xiao Liu, ..., Jie Tang, Ji-Rong Wen, Jinhui Yuan, Wayne Xin Zhao, Jun Zhu
AIFin · MQ · AI4MH · 24 · 807 · 0 · 14 Jun 2021

ERNIE-Tiny: A Progressive Distillation Framework for Pretrained Transformer Compression
Weiyue Su, Xuyi Chen, Shi Feng, Jiaxiang Liu, Weixin Liu, Yu Sun, Hao Tian, Hua-Hong Wu, Haifeng Wang
21 · 13 · 0 · 04 Jun 2021
Anchor-based Plain Net for Mobile Image Super-Resolution
Zongcai Du, Jie Liu, Jie Tang, Gangshan Wu
SupR · MQ · 28 · 52 · 0 · 20 May 2021

Rethinking Network Pruning -- under the Pre-train and Fine-tune Paradigm
Dongkuan Xu, Ian En-Hsu Yen, Jinxi Zhao, Zhibin Xiao
VLM · AAML · 20 · 55 · 0 · 18 Apr 2021

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
Nandan Thakur, Nils Reimers, Andreas Rucklé, Abhishek Srivastava, Iryna Gurevych
VLM · 229 · 961 · 0 · 17 Apr 2021

Compressing Visual-linguistic Model via Knowledge Distillation
Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lijuan Wang, Yezhou Yang, Zicheng Liu
VLM · 23 · 96 · 0 · 05 Apr 2021

Training Large-Scale News Recommenders with Pretrained Language Models in the Loop
Shitao Xiao, Zheng Liu, Yingxia Shao, Tao Di, Xing Xie
VLM · AIFin · 119 · 41 · 0 · 18 Feb 2021
Optimizing Inference Performance of Transformers on CPUs
D. Dice, Alex Kogan
19 · 15 · 0 · 12 Feb 2021

Learning to Augment for Data-Scarce Domain BERT Knowledge Distillation
Lingyun Feng, Minghui Qiu, Yaliang Li, Haitao Zheng, Ying Shen
31 · 10 · 0 · 20 Jan 2021

Intent Classification and Slot Filling for Privacy Policies
Wasi Uddin Ahmad, Jianfeng Chi, Tu Le, Thomas B. Norton, Yuan Tian, Kai-Wei Chang
11 · 23 · 0 · 01 Jan 2021

LRC-BERT: Latent-representation Contrastive Knowledge Distillation for Natural Language Understanding
Hao Fu, Shaojun Zhou, Qihong Yang, Junjie Tang, Guiquan Liu, Kaikui Liu, Xiaolong Li
27 · 56 · 0 · 14 Dec 2020

Pre-trained Summarization Distillation
Sam Shleifer, Alexander M. Rush
11 · 98 · 0 · 24 Oct 2020

Knowledge Distillation: A Survey
Jianping Gou, B. Yu, Stephen J. Maybank, Dacheng Tao
VLM · 17 · 2,822 · 0 · 09 Jun 2020
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, Ming Zhou
221 · 197 · 0 · 07 Feb 2020

MLQA: Evaluating Cross-lingual Extractive Question Answering
Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, Holger Schwenk
ELM · 242 · 490 · 0 · 16 Oct 2019

Text Summarization with Pretrained Encoders
Yang Liu, Mirella Lapata
MILM · 254 · 1,428 · 0 · 22 Aug 2019
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 294 · 6,943 · 0 · 20 Apr 2018
Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation
Yonghui Wu, M. Schuster, Z. Chen, Quoc V. Le, Mohammad Norouzi, ..., Alex Rudnick, Oriol Vinyals, G. Corrado, Macduff Hughes, J. Dean
AIMat · 716 · 6,724 · 0 · 26 Sep 2016