The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle, Michael Carbin
arXiv:1803.03635, 9 March 2018

Papers citing "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"

Showing 50 of 2,186 citing papers.
- Low-Complexity Inference in Continual Learning via Compressed Knowledge Transfer. Zhenrong Liu, Janne M. J. Huttunen, Mikko Honkala. 13 May 2025.
- Efficient Unstructured Pruning of Mamba State-Space Models for Resource-Constrained Environments. Ibne Farabi Shihab, Sanjeda Akter, Anuj Sharma. 13 May 2025.
- ICE-Pruning: An Iterative Cost-Efficient Pruning Pipeline for Deep Neural Networks. Wenhao Hu, Paul Henderson, José Cano. 12 May 2025.
- Guiding Evolutionary AutoEncoder Training with Activation-Based Pruning Operators (GECCO 2025). Steven Jorgensen, Erik Hemberg, J. Toutouh, Una-May O’Reilly. 08 May 2025.
- Sparse Training from Random Initialization: Aligning Lottery Ticket Masks using Weight Symmetry. Mohammed Adnan, Rohan Jain, Ekansh Sharma, Rahul Krishnan, Yani Andrew Ioannou. 08 May 2025.
- How to Train Your Metamorphic Deep Neural Network. Thomas Sommariva, Simone Calderara, Angelo Porrello. 07 May 2025.
- Optimizing LLMs for Resource-Constrained Environments: A Survey of Model Compression Techniques (COMPSAC 2025). Sanjay Surendranath Girija, Shashank Kapoor, Lakshit Arora, Dipen Pradhan, Aman Raj, Ankit Shetgaonkar. 05 May 2025.
- PASCAL: Precise and Efficient ANN-SNN Conversion using Spike Accumulation and Adaptive Layerwise Activation. Pranav Ramesh, Gopalakrishnan Srinivasan. 03 May 2025.
- Efficient Shapley Value-based Non-Uniform Pruning of Large Language Models. Chuan Sun, Han Yu, Lizhen Cui, Xiaoxiao Li. 03 May 2025.
- FineScope: Precision Pruning for Domain-Specialized Large Language Models Using SAE-Guided Self-Data Cultivation. Chaitali Bhattacharyya, Hyunsei Lee, Junyoung Lee, Shinhyoung Jang, Il hong Suh, Yeseong Kim. 01 May 2025.
- Sparse-to-Sparse Training of Diffusion Models. Inês Cardoso Oliveira, Decebal Constantin Mocanu, Luis A. Leiva. 30 Apr 2025.
- GDI-Bench: A Benchmark for General Document Intelligence with Vision and Reasoning Decoupling. Siqi Li, Yufan Shen, Xiangnan Chen, Jiayi Chen, Hengwei Ju, ..., Botian Shi, Y. Liu, Xinyu Cai, Yu Qiao. 30 Apr 2025.
- Model Connectomes: A Generational Approach to Data-Efficient Language Models. Klemen Kotar, Greta Tuckute. 29 Apr 2025.
- TeleSparse: Practical Privacy-Preserving Verification of Deep Neural Networks (PoPETs 2025). Mohammad Maheri, Hamed Haddadi, Alex Davidson. 27 Apr 2025.
- Communication-Efficient Personalized Distributed Learning with Data and Node Heterogeneity (IEEE TCCN 2025). Zhuojun Tian, Zhaoyang Zhang, Yiwei Li, Mehdi Bennis. 24 Apr 2025.
- Efficient Adaptation of Deep Neural Networks for Semantic Segmentation in Space Applications (Scientific Reports 2025). Leonardo Olivi, Edoardo Santero Mormile, Enzo Tartaglione. 22 Apr 2025.
- Connecting Parameter Magnitudes and Hessian Eigenspaces at Scale using Sketched Methods. Andres Fernandez, Frank Schneider, Maren Mahsereci, Philipp Hennig. 20 Apr 2025.
- Parameter-Efficient Continual Fine-Tuning: A Survey. Eric Nuertey Coleman, Luigi Quarantiello, Ziyue Liu, Qinwen Yang, Samrat Mukherjee, J. Hurtado, Vincenzo Lomonaco. 18 Apr 2025.
- Hadamard product in deep learning: Introduction, Advances and Challenges (IEEE TPAMI 2025). Grigorios G. Chrysos, Yongtao Wu, Razvan Pascanu, Philip Torr, Volkan Cevher. 17 Apr 2025.
- Enhanced Pruning Strategy for Multi-Component Neural Architectures Using Component-Aware Graph Analysis. Ganesh Sundaram, Jonas Ulmen, Daniel Görges. 17 Apr 2025.
- Sign-In to the Lottery: Reparameterizing Sparse Training From Scratch. Advait Gadhikar, Tom Jacobs, Chao Zhou, R. Burkholz. 17 Apr 2025.
- Collaborative Learning of On-Device Small Model and Cloud-Based Large Model: Advances and Future Directions. Chaoyue Niu, Yucheng Ding, Junhui Lu, Zhengxiang Huang, Hang Zeng, Yutong Dai, Xuezhen Tu, Chengfei Lv, Fan Wu, Guihai Chen. 17 Apr 2025.
- Set You Straight: Auto-Steering Denoising Trajectories to Sidestep Unwanted Concepts. Leyang Li, Shilin Lu, Yan Ren, A. Kong. 17 Apr 2025.
- You Don't Need All Attentions: Distributed Dynamic Fine-Tuning for Foundation Models. Shiwei Ding, Lan Zhang, Zhenlin Wang, Giuseppe Ateniese, Xiaoyong Yuan. 16 Apr 2025.
- Learning Compatible Multi-Prize Subnetworks for Asymmetric Retrieval (CVPR 2025). Yushuai Sun, Zikun Zhou, Shihong Deng, Yaowei Wang, Jun Yu, Guangming Lu, Wenjie Pei. 16 Apr 2025.
- Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding. Francesca Rivelli, Martin Popov, Charalampos Kouzinopoulos, Guangzhi Tang. 15 Apr 2025.
- CUT: Pruning Pre-Trained Multi-Task Models into Compact Models for Edge Devices (ICIC 2025). Jingxuan Zhou, Weidong Bao, Ji Wang, Zhengyi Zhong. 14 Apr 2025.
- Early-Bird Diffusion: Investigating and Leveraging Timestep-Aware Early-Bird Tickets in Diffusion Models for Efficient Training (CVPR 2025). Lexington Whalen, Zhenbang Du, Haoran You, Chaojian Li, Sixu Li, Yingyan. 13 Apr 2025.
- Evolved Hierarchical Masking for Self-Supervised Learning (IEEE TPAMI 2024). Zhanzhou Feng, Shiliang Zhang. 12 Apr 2025.
- Identifying Key Challenges of Hardness-Based Resampling. Pawel Pukowski, Venet Osmani. 09 Apr 2025.
- SparsyFed: Sparse Adaptive Federated Training. Adriano Guastella, Lorenzo Sani, Alex Iacob, Alessio Mora, Paolo Bellavista, Nicholas D. Lane. 07 Apr 2025.
- Few Dimensions are Enough: Fine-tuning BERT with Selected Dimensions Revealed Its Redundant Nature. Shion Fukuhata, Yoshinobu Kano. 07 Apr 2025.
- The Neural Pruning Law Hypothesis. Eugen Barbulescu, Antonio Alexoaie, Lucian Busoniu. 06 Apr 2025.
- Thanos: A Block-wise Pruning Algorithm for Efficient Large Language Model Compression. Ivan Ilin, Peter Richtárik. 06 Apr 2025.
- The Effects of Grouped Structural Global Pruning of Vision Transformers on Domain Generalisation. Hamza Riaz, Alan F. Smeaton. 05 Apr 2025.
- Efficient Model Editing with Task-Localized Sparse Fine-tuning (ICLR 2025). Leonardo Iurada, Marco Ciccone, Tatiana Tommasi. 03 Apr 2025.
- FedPaI: Achieving Extreme Sparsity in Federated Learning via Pruning at Initialization. Haonan Wang, Ziqiang Liu, Kajimusugura Hoshino, Tuo Zhang, J. Walters, S. Crago. 01 Apr 2025.
- SQuat: Subspace-orthogonal KV Cache Quantization. Hao Wang, Ligong Han, Kai Xu, Akash Srivastava. 31 Mar 2025.
- Model Hemorrhage and the Robustness Limits of Large Language Models. Ziyang Ma, Hui Yuan, Guang Dai, Gui-Song Xia, Bo Du, Liangpei Zhang, Dacheng Tao. 31 Mar 2025.
- STADE: Standard Deviation as a Pruning Metric. Diego Coello de Portugal Mecke, Haya Alyoussef, Ilia Koloiarov, Lars Schmidt-Thieme. 28 Mar 2025.
- Almost Bayesian: The Fractal Dynamics of Stochastic Gradient Descent. Max Hennick, Stijn De Baerdemacker. 28 Mar 2025.
- MixFunn: A Neural Network for Differential Equations with Improved Generalization and Interpretability. T. S. Farias, Gubio Gomes de Lima, Jonas Maziero, Celso Jorge Villas-Boas. 28 Mar 2025.
- As easy as PIE: understanding when pruning causes language models to disagree (NAACL 2025). Pietro Tropeano, Maria Maistro, Tuukka Ruotsalo, Christina Lioma. 27 Mar 2025.
- Boosting Large Language Models with Mask Fine-Tuning. M. Zhang, Yue Bai, Huan Wang, Yizhou Wang, Qihua Dong, Y. Fu. 27 Mar 2025.
- Neuroplasticity in Artificial Intelligence -- An Overview and Inspirations on Drop In & Out Learning. Yupei Li, M. Milling, Björn Schuller. 27 Mar 2025.
- Generative Linguistics, Large Language Models, and the Social Nature of Scientific Success. Sophie Hao. 25 Mar 2025.
- MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning (CVPR 2025). Xu Han, Yuan Tang, Jinfeng Xu, Xianzhi Li. 24 Mar 2025.
- On the Optimality of Single-label and Multi-label Neural Network Decoders. Yunus Can Gültekin, Péter Scheepers, Yuncheng Yuan, Federico Corradi, Alex Alvarado. 24 Mar 2025.
- Maximum Redundancy Pruning: A Principle-Driven Layerwise Sparsity Allocation for LLMs. Chang Gao, Kang Zhao, Runqi Wang, Jianfei Chen, Liping Jing. 24 Mar 2025.
- Finding Stable Subnetworks at Initialization with Dataset Distillation. Luke McDermott, Rahul Parhi. 23 Mar 2025.