Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing

31 May 2021
David Peer, Sebastian Stabinger, Stefan Engl, A. Rodríguez-Sánchez
ArXiv (abs) · PDF · HTML · GitHub (7★)

Papers citing "Greedy-layer Pruning: Speeding up Transformer Models for Natural Language Processing"

3 / 3 papers shown
Why Lift so Heavy? Slimming Large Language Models by Cutting Off the Layers
Shuzhou Yuan, Ercong Nie, Bolei Ma, Michael Färber
18 Feb 2024

CoMFLP: Correlation Measure based Fast Search on ASR Layer Pruning
W. Liu, Zhiyuan Peng, Tan Lee
21 Sep 2023

The EarlyBIRD Catches the Bug: On Exploiting Early Layers of Encoder Models for More Efficient Code Classification
Anastasiia Grishina, Max Hort, Leon Moonen
08 May 2023