Depth-Adaptive Transformer

22 October 2019
Maha Elbayad, Jiatao Gu, Edouard Grave, Michael Auli

Papers citing "Depth-Adaptive Transformer"

40 / 40 papers shown

Image Recognition with Online Lightweight Vision Transformer: A Survey (06 May 2025)
Zherui Zhang, Rongtao Xu, Jie Zhou, Changwei Wang, Xingtian Pei, ..., Jiguang Zhang, Li Guo, Longxiang Gao, W. Xu, Shibiao Xu
Topics: ViT

Revisiting Transformers through the Lens of Low Entropy and Dynamic Sparsity (26 Apr 2025)
Ruifeng Ren, Yong Liu

EPSILON: Adaptive Fault Mitigation in Approximate Deep Neural Network using Statistical Signatures (24 Apr 2025)
Khurram Khalil, K. A. Hoque
Topics: AAML

Towards Understanding How Knowledge Evolves in Large Vision-Language Models (31 Mar 2025)
Sudong Wang, Y. Zhang, Yao Zhu, Jianing Li, Zizhe Wang, Y. Liu, Xiangyang Ji

ETO: Efficient Transformer-based Local Feature Matching by Organizing Multiple Homography Hypotheses (30 Oct 2024)
Junjie Ni, Guofeng Zhang, Guanglin Li, Yijin Li, Xinyang Liu, Zhaoyang Huang, Hujun Bao
Topics: ViT

Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA (28 Oct 2024)
Sangmin Bae, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Seungyeon Kim, Tal Schuster
Topics: KELM

MrT5: Dynamic Token Merging for Efficient Byte-level Language Models (28 Oct 2024)
Julie Kallini, Shikhar Murty, Christopher D. Manning, Christopher Potts, Róbert Csordás

MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation (15 Oct 2024)
Chenxi Wang, Xiang Chen, N. Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, H. Chen
Topics: MLLM, LRM

Hyper-multi-step: The Truth Behind Difficult Long-context Tasks (06 Oct 2024)
Yijiong Yu, Ma Xiufa, Fang Jianwei, Zhi-liang Xu, Su Guangyao, ..., Zhixiao Qi, Wei Wang, W. Liu, Ran Chen, Ji Pei
Topics: LRM, RALM

Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models (04 Oct 2024)
Xin Zou, Yizhou Wang, Yibo Yan, Yuanhuiyi Lyu, Kening Zheng, ..., Junkai Chen, Peijie Jiang, J. Liu, Chang Tang, Xuming Hu

PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined Speculation (16 Jul 2024)
Branden Butler, Sixing Yu, Arya Mazaheri, Ali Jannesari
Topics: LRM

ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models (24 Jun 2024)
Yash Akhauri, Ahmed F. AbouElhamayed, Jordan Dotzel, Zhiru Zhang, Alexander M Rush, Safeen Huda, Mohamed S. Abdelfattah

On the Impact of Black-box Deployment Strategies for Edge AI on Latency and Model Performance (25 Mar 2024)
Jaskirat Singh, Emad Fallahzadeh, Bram Adams, Ahmed E. Hassan
Topics: MQ

Investigating Recurrent Transformers with Dynamic Halt (01 Feb 2024)
Jishnu Ray Chowdhury, Cornelia Caragea

EE-LLM: Large-Scale Training and Inference of Early-Exit Large Language Models with 3D Parallelism (08 Dec 2023)
Yanxi Chen, Xuchen Pan, Yaliang Li, Bolin Ding, Jingren Zhou
Topics: LRM

Zero-TPrune: Zero-Shot Token Pruning through Leveraging of the Attention Graph in Pre-Trained Transformers (27 May 2023)
Hongjie Wang, Bhishma Dedhia, N. Jha
Topics: ViT, VLM

Can Public Large Language Models Help Private Cross-device Federated Learning? (20 May 2023)
Boxin Wang, Yibo Zhang, Yuan Cao, Bo-wen Li, H. B. McMahan, Sewoong Oh, Zheng Xu, Manzil Zaheer
Topics: FedML

Adaptive Deep Neural Network Inference Optimization with EENet (15 Jan 2023)
Fatih Ilhan, Ka-Ho Chow, Sihao Hu, Tiansheng Huang, Selim Tekin, ..., Myungjin Lee, Ramana Rao Kompella, Hugo Latapie, Gan Liu, Ling Liu

Fast Inference from Transformers via Speculative Decoding (30 Nov 2022)
Yaniv Leviathan, Matan Kalman, Yossi Matias
Topics: LRM

ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation (24 Oct 2022)
Junyi Li, Tianyi Tang, Wayne Xin Zhao, J. Nie, Ji-Rong Wen

Unsupervised Early Exit in DNNs with Multiple Exits (20 Sep 2022)
U. HariNarayanN, M. Hanawal, Avinash Bhardwaj

Efficient Methods for Natural Language Processing: A Survey (31 Aug 2022)
Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz

Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space (28 Mar 2022)
Mor Geva, Avi Caciularu, Ke Wang, Yoav Goldberg
Topics: KELM

Token Dropping for Efficient BERT Pretraining (24 Mar 2022)
Le Hou, Richard Yuanzhe Pang, Tianyi Zhou, Yuexin Wu, Xinying Song, Xiaodan Song, Denny Zhou

A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation (03 Mar 2022)
Tianxiang Sun, Xiangyang Liu, Wei-wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, Xipeng Qiu

Towards Efficient NLP: A Standard Evaluation and A Strong Baseline (13 Oct 2021)
Xiangyang Liu, Tianxiang Sun, Junliang He, Jiawen Wu, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu
Topics: ELM

Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference (24 Sep 2021)
Sneha Kudugunta, Yanping Huang, Ankur Bapna, M. Krikun, Dmitry Lepikhin, Minh-Thang Luong, Orhan Firat
Topics: MoE

Stateful ODE-Nets using Basis Function Expansions (21 Jun 2021)
A. Queiruga, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney

Accelerating BERT Inference for Sequence Labeling via Early-Exit (28 May 2021)
Xiaonan Li, Yunfan Shao, Tianxiang Sun, Hang Yan, Xipeng Qiu, Xuanjing Huang

Single-Layer Vision Transformers for More Accurate Early Exits with Less Overhead (19 May 2021)
Arian Bakhtiarnia, Qi Zhang, Alexandros Iosifidis

Attention for Image Registration (AiR): an unsupervised Transformer approach (05 May 2021)
Zihao W. Wang, H. Delingette
Topics: ViT, MedIm

Split Computing and Early Exiting for Deep Learning Applications: Survey and Research Challenges (08 Mar 2021)
Yoshitomo Matsubara, Marco Levorato, Francesco Restuccia

Improving Anytime Prediction with Parallel Cascaded Networks and a Temporal-Difference Loss (19 Feb 2021)
Michael L. Iuzzolino, Michael C. Mozer, Samy Bengio
Topics: OOD

Reservoir Transformers (30 Dec 2020)
Sheng Shen, Alexei Baevski, Ari S. Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela

Language Models not just for Pre-training: Fast Online Neural Noisy Channel Modeling (13 Nov 2020)
Shruti Bhosale, Kyra Yee, Sergey Edunov, Michael Auli

AdapterDrop: On the Efficiency of Adapters in Transformers (22 Oct 2020)
Andreas Rucklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych

GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding (30 Jun 2020)
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, M. Krikun, Noam M. Shazeer, Z. Chen
Topics: MoE

The Right Tool for the Job: Matching Model and Instance Complexities (16 Apr 2020)
Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, Noah A. Smith

Pre-trained Models for Natural Language Processing: A Survey (18 Mar 2020)
Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
Topics: LM&MA, VLM

Classical Structured Prediction Losses for Sequence to Sequence Learning (14 Nov 2017)
Sergey Edunov, Myle Ott, Michael Auli, David Grangier, Marc'Aurelio Ranzato
Topics: AIMat