arXiv:2202.12002
Rare Gems: Finding Lottery Tickets at Initialization

24 February 2022
Kartik K. Sreenivasan, Jy-yong Sohn, Liu Yang, Matthew Grinde, Alliot Nagle, Hongyi Wang, Eric P. Xing, Kangwook Lee, Dimitris Papailiopoulos

Papers citing "Rare Gems: Finding Lottery Tickets at Initialization"

28 papers shown

Fishing For Cheap And Efficient Pruners At Initialization
Ivo Gollini Navarrete, Nicolas Mauricio Cuadrado, Jose Renato Restom, Martin Takáč, Samuel Horvath (17 Feb 2025)

Forward Once for All: Structural Parameterized Adaptation for Efficient Cloud-coordinated On-device Recommendation
Kairui Fu, Zheqi Lv, Shengyu Zhang, Fan Wu, Kun Kuang (07 Jan 2025)

FRUGAL: Memory-Efficient Optimization by Reducing State Overhead for Scalable Training
Philip Zmushko, Aleksandr Beznosikov, Martin Takáč, Samuel Horváth (12 Nov 2024)

Mask in the Mirror: Implicit Sparsification
Tom Jacobs, R. Burkholz (19 Aug 2024)

Nerva: a Truly Sparse Implementation of Neural Networks
Wieger Wesselink, Bram Grooten, Qiao Xiao, Cássio Machado de Campos, Mykola Pechenizkiy (24 Jul 2024)

DIET: Customized Slimming for Incompatible Networks in Sequential Recommendation
Kairui Fu, Shengyu Zhang, Zheqi Lv, Jingyuan Chen, Jiwei Li (13 Jun 2024)

Optimal Eye Surgeon: Finding Image Priors through Sparse Generators at Initialization
Avrajit Ghosh, Xitong Zhang, Kenneth K. Sun, Qing Qu, S. Ravishankar, Rongrong Wang (07 Jun 2024)

Optimal Recurrent Network Topologies for Dynamical Systems Reconstruction
Christoph Jurgen Hemmer, Manuel Brenner, Florian Hess, Daniel Durstewitz (07 Jun 2024)

Pruner-Zero: Evolving Symbolic Pruning Metric from Scratch for Large Language Models
Peijie Dong, Lujun Li, Zhenheng Tang, Xiang Liu, Xinglin Pan, Qiang-qiang Wang, Xiaowen Chu (05 Jun 2024)

Nickel and Diming Your GAN: A Dual-Method Approach to Enhancing GAN Efficiency via Knowledge Distillation
Sangyeop Yeo, Yoojin Jang, Jaejun Yoo (19 May 2024)

No Free Prune: Information-Theoretic Barriers to Pruning at Initialization
Tanishq Kumar, Kevin Luo, Mark Sellke (02 Feb 2024)

Multicoated and Folded Graph Neural Networks with Strong Lottery Tickets
Jiale Yan, Hiroaki Ito, Ángel López García-Arias, Yasuyuki Okoshi, Hikari Otsuka, Kazushi Kawamura, Thiem Van Chu, Masato Motomura (06 Dec 2023)

Resource-Constrained Knowledge Diffusion Processes Inspired by Human Peer Learning
Ehsan Beikihassan, Amy K. Hoover, Ioannis Koutis, Alipanah Parviz, Niloofar Aghaieabiane (01 Dec 2023)

Maestro: Uncovering Low-Rank Structures via Trainable Decomposition
Samuel Horváth, Stefanos Laskaridis, Shashank Rajput, Hongyi Wang (28 Aug 2023)

Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse Training
A. Nowak, Bram Grooten, D. Mocanu, Jacek Tabor (21 Jun 2023)

How Sparse Can We Prune A Deep Network: A Fundamental Limit Viewpoint
Qiaozhe Zhang, Rui-qi Zhang, Jun Sun, Yingzhuang Liu (09 Jun 2023)

One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Guangtao Zeng, Peiyuan Zhang, Wei Lu (28 May 2023)

Understanding Sparse Neural Networks from their Topology via Multipartite Graph Representations
Elia Cunegatti, Matteo Farina, Doina Bucur, Giovanni Iacca (26 May 2023)

Cuttlefish: Low-Rank Model Training without All the Tuning
Hongyi Wang, Saurabh Agarwal, Pongsakorn U-chupala, Yoshiki Tanaka, Eric P. Xing, Dimitris Papailiopoulos (04 May 2023)

Pruning Before Training May Improve Generalization, Provably
Hongru Yang, Yingbin Liang, Xiaojie Guo, Lingfei Wu, Zhangyang Wang (01 Jan 2023)

Can We Find Strong Lottery Tickets in Generative Models?
Sangyeop Yeo, Yoojin Jang, Jy-yong Sohn, Dongyoon Han, Jaejun Yoo (16 Dec 2022)

LOFT: Finding Lottery Tickets through Filter-wise Training
Qihan Wang, Chen Dun, Fangshuo Liao, C. Jermaine, Anastasios Kyrillidis (28 Oct 2022)

Unmasking the Lottery Ticket Hypothesis: What's Encoded in a Winning Ticket's Mask?
Mansheej Paul, F. Chen, Brett W. Larsen, Jonathan Frankle, Surya Ganguli, Gintare Karolina Dziugaite (06 Oct 2022)

Why Random Pruning Is All We Need to Start Sparse
Advait Gadhikar, Sohom Mukherjee, R. Burkholz (05 Oct 2022)

On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks
Hongru Yang, Zhangyang Wang (27 Mar 2022)

The Lottery Ticket Hypothesis for Object Recognition
Sharath Girish, Shishira R. Maiya, Kamal Gupta, Hao Chen, L. Davis, Abhinav Shrivastava (08 Dec 2020)

The Lottery Ticket Hypothesis for Pre-trained BERT Networks
Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin (23 Jul 2020)

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin (05 Mar 2020)