arXiv: 2105.14839
Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing
31 May 2021
David Peer, Sebastian Stabinger, Stefan Engl, A. Rodríguez-Sánchez
Links: arXiv (abs) · PDF · HTML · GitHub (7★)
Papers citing "Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing" (4 of 4 papers shown):
Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers
Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Farber
18 Feb 2024
CoMFLP: Correlation Measure based Fast Search on ASR Layer Pruning
W. Liu, Zhiyuan Peng, Tan Lee
21 Sep 2023
The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification
Anastasiia Grishina, Max Hort, Leon Moonen
08 May 2023
Gradient-Free Structured Pruning with Unlabeled Data
Azade Nova, H. Dai, Dale Schuurmans
07 Mar 2023