Layer-wise Model Pruning based on Mutual Information
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021
28 August 2021
Chun Fan, Jiwei Li, Xiang Ao, Leilei Gan, Yuxian Meng, Xiaofei Sun
arXiv: 2108.12594

Papers citing "Layer-wise Model Pruning based on Mutual Information"

14 papers shown

MISA: Memory-Efficient LLMs Optimization with Module-wise Importance Sampling
Yuxi Liu, Renjia Deng, Yutong He, Xue Wang, Tao Yao, Kun Yuan
28 Oct 2025

Mix-QSAM: Mixed-Precision Quantization of the Segment Anything Model
Navin Ranjan, Andreas E. Savakis
08 May 2025

Attention Pruning: Automated Fairness Repair of Language Models via Surrogate Simulated Annealing
Vishnu Asutosh Dasu, Md Rafi Ur Rashid, Vipul Gupta, Saeid Tizpaz-Niari, Gang Tan
20 Mar 2025

Dynamic Low-Rank Sparse Adaptation for Large Language Models
International Conference on Learning Representations (ICLR), 2025
Weizhong Huang, Yuxin Zhang, Xiawu Zheng, Wenshu Fan, Aiyue Chen, Yiwu Yao, Rongrong Ji
21 Feb 2025

How Redundant Is the Transformer Stack in Speech Representation Models?
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2024
Teresa Dorszewski, Albert Kjøller Jacobsen, Lenka Tětková, Lars Kai Hansen
20 Jan 2025

Layer-wise Importance Matters: Less Memory for Better Performance in Parameter-efficient Fine-tuning of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024
Kai Yao, P. Gao, Lichun Li, Yuan Zhao, Xiaofeng Wang, Wei Wang, Jianke Zhu
15 Oct 2024

Persistent Topological Features in Large Language Models
Yuri Gardinazzi, Giada Panerai, Karthik Viswanathan, A. Ansuini, Alberto Cazzaniga, Matteo Biagetti
14 Oct 2024

MPruner: Optimizing Neural Network Size with CKA-Based Mutual Information Pruning
Seungbeom Hu, ChanJun Park, Andrew Ferraiuolo, Sang-Ki Ko, Jinwoo Kim, Haein Song, Jieung Kim
24 Aug 2024

The Remarkable Robustness of LLMs: Stages of Inference?
Vedang Lad, Wes Gurnee, Max Tegmark
27 Jun 2024

Large Language Model Pruning
Hanjuan Huang, Hao-Jia Song, H. Pao
24 May 2024

The Unreasonable Ineffectiveness of the Deeper Layers
Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts
26 Mar 2024

Fairness-Aware Structured Pruning in Transformers
A. Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, Sarath Chandar
24 Dec 2023

f-Divergence Minimization for Sequence-Level Knowledge Distillation
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou
27 Jul 2023

Pruning Pretrained Encoders with a Multitask Objective
Patrick Xia, Richard Shin
10 Dec 2021