The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
9 March 2018 · arXiv:1803.03635
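
The paper's central claim: a randomly initialized dense network contains sparse subnetworks ("winning tickets") that, trained in isolation from their original initialization, match the accuracy of the full network. The tickets are found by iterative magnitude pruning with rewinding: train, prune the smallest-magnitude weights, reset the survivors to their initial values, and repeat. Below is a minimal PyTorch-style sketch of that loop; `find_lottery_ticket`, `train_fn`, `prune_frac`, and `rounds` are illustrative names, not anything from the authors' code.

```python
import copy
import torch

def find_lottery_ticket(model, train_fn, prune_frac=0.2, rounds=5):
    """Iterative magnitude pruning with rewinding (illustrative sketch).

    train_fn(model) trains the model in place; prune_frac is the
    fraction of surviving weights removed each round.
    """
    init_state = copy.deepcopy(model.state_dict())  # theta_0, kept for rewinding
    # One binary mask per weight tensor; biases are left unpruned.
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        # Train the current subnetwork to completion. (A full implementation
        # would also re-apply the masks after every optimizer step.)
        train_fn(model)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name not in masks:
                    continue
                # Cut the prune_frac smallest-magnitude surviving weights.
                alive = p.abs()[masks[name].bool()]
                cutoff = torch.quantile(alive, prune_frac)
                masks[name] *= (p.abs() > cutoff).float()
            # Rewind surviving weights to their original initialization.
            model.load_state_dict(init_state)
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])
    return model, masks
```

After the final round, the masked subnetwork retrained from theta_0 is the candidate winning ticket; a production version would typically manage the masks with torch.nn.utils.prune rather than the hand-rolled dictionary above.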

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 50 of 594 citing papers, newest first.
  • Investigating the Effect of Network Pruning on Performance and Interpretability. Jonathan von Rad, Florian Seuffert. 29 Sep 2024.
  • Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment. Naoya Hasegawa, Issei Sato. 26 Sep 2024.
  • Training Neural Networks for Modularity aids Interpretability. Satvik Golechha, Dylan R. Cope, Nandi Schoots. 24 Sep 2024.
  • On Importance of Pruning and Distillation for Efficient Low Resource NLP. Aishwarya Mirashi, Purva Lingayat, Srushti Sonavane, Tejas Padhiyar, Raviraj Joshi, Geetanjali Kale. 21 Sep 2024.
  • Monomial Matrix Group Equivariant Neural Functional Networks. Hoang V. Tran, Thieu N. Vo, Tho H. Tran, An T. Nguyen, Tan M. Nguyen. 18 Sep 2024.
  • S-STE: Continuous Pruning Function for Efficient 2:4 Sparse Pre-training. Yuezhou Hu, Jun-Jie Zhu, Jianfei Chen. 13 Sep 2024.
  • Self-Masking Networks for Unsupervised Adaptation. Alfonso Taboada Warmerdam, Mathilde Caron, Yuki M. Asano. 11 Sep 2024.
  • HESSO: Towards Automatic Efficient and User Friendly Any Neural Network Training and Pruning. Tianyi Chen, Xiaoyi Qu, David Aponte, Colby R. Banbury, Jongwoo Ko, Tianyu Ding, Yong Ma, Vladimir Lyapunov, Ilya Zharkov, Luming Liang. 11 Sep 2024.
  • Mask in the Mirror: Implicit Sparsification. Tom Jacobs, R. Burkholz. 19 Aug 2024.
  • Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments. Angie Boggust, Venkatesh Sivaraman, Yannick Assogba, Donghao Ren, Dominik Moritz, Fred Hohman. [VLM] 06 Aug 2024.
  • ThinK: Thinner Key Cache by Query-Driven Pruning. Yuhui Xu, Zhanming Jie, Hanze Dong, Lei Wang, Xudong Lu, Aojun Zhou, Amrita Saha, Caiming Xiong, Doyen Sahoo. 30 Jul 2024.
  • LORTSAR: Low-Rank Transformer for Skeleton-based Action Recognition. Soroush Oraki, Harry Zhuang, Jie Liang. 19 Jul 2024.
  • The Impact of Quantization and Pruning on Deep Reinforcement Learning Models. Heng Lu, Mehdi Alemi, Reza Rawassizadeh. 05 Jul 2024.
  • SAFT: Towards Out-of-Distribution Generalization in Fine-Tuning. Bac Nguyen, Stefan Uhlich, Fabien Cardinaux, Lukas Mauch, Marzieh Edraki, Aaron Courville. [OODD, CLL, VLM] 03 Jul 2024.
  • LPViT: Low-Power Semi-structured Pruning for Vision Transformers. Kaixin Xu, Zhe Wang, Chunyun Chen, Xue Geng, Jie Lin, Xulei Yang, Min-man Wu, Min Wu, Xiaoli Li, Weisi Lin. [ViT, VLM] 02 Jul 2024.
  • Neural Networks Trained by Weight Permutation are Universal Approximators. Yongqiang Cai, Gaohang Chen, Zhonghua Qiao. 01 Jul 2024.
  • Trimming the Fat: Efficient Compression of 3D Gaussian Splats through Pruning. Muhammad Salman Ali, Maryam Qamar, Sung-Ho Bae, Enzo Tartaglione. [3DGS] 26 Jun 2024.
  • A Thorough Performance Benchmarking on Lightweight Embedding-based Recommender Systems. Hung Vinh Tran, Tong Chen, Quoc Viet Hung Nguyen, Zi-Rui Huang, Lizhen Cui, Hongzhi Yin. 25 Jun 2024.
  • ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models. Yash Akhauri, Ahmed F. AbouElhamayed, Jordan Dotzel, Zhiru Zhang, Alexander M Rush, Safeen Huda, Mohamed S. Abdelfattah. 24 Jun 2024.
  • Towards Lightweight Graph Neural Network Search with Curriculum Graph Sparsification. Beini Xie, Heng Chang, Ziwei Zhang, Zeyang Zhang, Simin Wu, Xin Wang, Yuan Meng, Wenwu Zhu. 24 Jun 2024.
  • Geometric sparsification in recurrent neural networks. Wyatt Mackey, Ioannis Schizas, Jared Deighton, David L. Boothe, Jr., Vasileios Maroulas. 10 Jun 2024.
  • Evaluating Zero-Shot Long-Context LLM Compression. Chenyu Wang, Yihan Wang, Kai Li. 10 Jun 2024.
  • Designs for Enabling Collaboration in Human-Machine Teaming via Interactive and Explainable Systems. Rohan R. Paleja, Michael Munje, K. Chang, Reed Jensen, Matthew C. Gombolay. 07 Jun 2024.
  • Feature contamination: Neural networks learn uncorrelated features and fail to generalize. Tianren Zhang, Chujie Zhao, Guanyu Chen, Yizhou Jiang, Feng Chen. [OOD, MLT, OODD] 05 Jun 2024.
  • Effective Interplay between Sparsity and Quantization: From Theory to Practice. Simla Burcu Harma, Ayan Chakraborty, Elizaveta Kostenok, Danila Mishin, Dongho Ha, ..., Martin Jaggi, Ming Liu, Yunho Oh, Suvinay Subramanian, Amir Yazdanbakhsh. [MQ] 31 May 2024.
  • Dual sparse training framework: inducing activation map sparsity via Transformed ℓ1 regularization. Xiaolong Yu, Cong Tian. 30 May 2024.
  • Survival of the Fittest Representation: A Case Study with Modular Addition. Xiaoman Delores Ding, Zifan Carl Guo, Eric J. Michaud, Ziming Liu, Max Tegmark. 27 May 2024.
  • Scorch: A Library for Sparse Deep Learning. Bobby Yan, Alexander J. Root, Trevor Gale, David Broman, Fredrik Kjolstad. 27 May 2024.
  • Combining Relevance and Magnitude for Resource-Aware DNN Pruning. C. Chiasserini, F. Malandrino, Nuria Molner, Zhiqiang Zhao. 21 May 2024.
  • Knowledge Graph Pruning for Recommendation. Fake Lin, Xi Zhu, Ziwei Zhao, Deqiang Huang, Yu Yu, Xueying Li, Zhi Zheng, Tong Bill Xu, Enhong Chen. 19 May 2024.
  • A Systematic Evaluation of Large Language Models for Natural Language Generation Tasks. Xuanfan Ni, Piji Li. [ELM, LRM] 16 May 2024.
  • Neural Network Compression for Reinforcement Learning Tasks. Dmitry A. Ivanov, D. Larionov, Oleg V. Maslennikov, V. Voevodin. [OffRL, AI4CE] 13 May 2024.
  • Fast and Controllable Post-training Sparsity: Learning Optimal Sparsity Allocation with Global Constraint in Minutes. Ruihao Gong, Yang Yong, Zining Wang, Jinyang Guo, Xiuying Wei, Yuqing Ma, Xianglong Liu. 09 May 2024.
  • Efficient and Flexible Method for Reducing Moderate-size Deep Neural Networks with Condensation. Tianyi Chen, Zhi-Qin John Xu. 02 May 2024.
  • Leveraging Active Subspaces to Capture Epistemic Model Uncertainty in Deep Generative Models for Molecular Design. A. N. M. N. Abeer, Sanket R. Jantre, Nathan M. Urban, Byung-Jun Yoon. 30 Apr 2024.
  • Hidden Synergy: L1 Weight Normalization and 1-Path-Norm Regularization. Aditya Biswas. 29 Apr 2024.
  • Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization. Bailey J. Eccles, Leon Wong, Blesson Varghese. 22 Apr 2024.
  • FedSelect: Personalized Federated Learning with Customized Selection of Parameters for Fine-Tuning. Rishub Tamirisa, Chulin Xie, Wenxuan Bao, Andy Zhou, Ron Arel, Aviv Shamsian. 03 Apr 2024.
  • The Unreasonable Ineffectiveness of the Deeper Layers. Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts. 26 Mar 2024.
  • Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance. Suryam Arnav Kalra, Arindam Biswas, Pabitra Mitra, Biswajit Basu. [GNN] 17 Mar 2024.
  • Merging Text Transformer Models from Different Initializations. Neha Verma, Maha Elbayad. [MoMe] 01 Mar 2024.
  • NeuroPrune: A Neuro-inspired Topological Sparse Training Algorithm for Large Language Models. Amit Dhurandhar, Tejaswini Pedapati, Ronny Luss, Soham Dan, Aurélie C. Lozano, Payel Das, Georgios Kollias. 28 Feb 2024.
  • SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization. T. Yasuda, Kyriakos Axiotis, Gang Fu, M. Bateni, Vahab Mirrokni. 27 Feb 2024.
  • Towards Meta-Pruning via Optimal Transport. Alexander Theus, Olin Geimer, Friedrich Wicke, Thomas Hofmann, Sotiris Anagnostidis, Sidak Pal Singh. [MoMe] 12 Feb 2024.
  • Continual Learning on Graphs: A Survey. Zonggui Tian, Duanhao Zhang, Hong-Ning Dai. 09 Feb 2024.
  • Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes. Lucio Dery, Steven Kolawole, Jean-Francois Kagey, Virginia Smith, Graham Neubig, Ameet Talwalkar. 08 Feb 2024.
  • Analysis of Linear Mode Connectivity via Permutation-Based Weight Matching: With Insights into Other Permutation Search Methods. Akira Ito, Masanori Yamada, Atsutoshi Kumagai. [MoMe] 06 Feb 2024.
  • Faster and Lighter LLMs: A Survey on Current Challenges and Way Forward. Arnav Chavan, Raghav Magazine, Shubham Kushwaha, M. Debbah, Deepak Gupta. 02 Feb 2024.
  • Manipulating Sparse Double Descent. Ya Shi Zhang. 19 Jan 2024.
  • Stochastic Subnetwork Annealing: A Regularization Technique for Fine Tuning Pruned Subnetworks. Tim Whitaker, Darrell Whitley. 16 Jan 2024.