
Lottery Jackpots Exist in Pre-trained Models

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
18 April 2021
Yuxin Zhang
Mingbao Lin
Yan Wang
Jiayi Ji
Rongrong Ji
Links: arXiv (abs) · PDF · HTML · Hugging Face (1 upvote) · GitHub (34★)

Papers citing "Lottery Jackpots Exist in Pre-trained Models"

9 of 9 citing papers shown
Kernelized Sparse Fine-Tuning with Bi-level Parameter Competition for Vision Models
Shufan Shen, Junshu Sun, Shuhui Wang, Qingming Huang
28 Oct 2025
Collaborative Compression for Large-Scale MoE Deployment on Edge
Yixiao Chen, Yanyue Xie, Ruining Yang, Wei Jiang, Wei Wang, Yong He, Yue Chen, Pu Zhao, Y. Wang
30 Sep 2025 · MQ
A Convex-optimization-based Layer-wise Post-training Pruner for Large Language Models
Pengxiang Zhao, Hanyu Hu, Ping Li, Yi Zheng, Zhefeng Wang, Xiaoming Yuan
07 Aug 2024
BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation
Peng Xu, Wenqi Shao, Mengzhao Chen, Shitao Tang, Kai-Chuang Zhang, Shiyang Feng, Fengwei An, Yu Qiao, Ping Luo
18 Feb 2024 · MoE
Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs
International Conference on Learning Representations (ICLR), 2023
Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, Rongrong Ji
13 Oct 2023 · SyDa
Exploiting the Partly Scratch-off Lottery Ticket for Quantization-Aware Training
Mingliang Xu, Gongrui Nan, Yuxin Zhang, Jiayi Ji, Rongrong Ji
12 Nov 2022 · MQ
Parameter-Efficient Sparsity for Large Language Models Fine-Tuning
International Joint Conference on Artificial Intelligence (IJCAI), 2022
Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai
23 May 2022 · MQ
Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
Artificial Intelligence Review (Artif Intell Rev), 2022
Paul Wimmer, Jens Mehnert, Alexandru Paul Condurache
17 May 2022 · DD
OptG: Optimizing Gradient-driven Criteria in Network Sparsity
Yuxin Zhang, Mingbao Lin, Mengzhao Chen, Jiayi Ji, Rongrong Ji
30 Jan 2022