ResearchTrend.AI
Efficient Neural Network Training via Forward and Backward Propagation Sparsification

arXiv:2111.05685 · 10 November 2021
Xiao Zhou, Weizhong Zhang, Zonghao Chen, Shizhe Diao, Tong Zhang

Papers citing "Efficient Neural Network Training via Forward and Backward Propagation Sparsification"

27 / 27 papers shown
Advancing Weight and Channel Sparsification with Enhanced Saliency
Xinglong Sun, Maying Shen, Hongxu Yin, Lei Mao, Pavlo Molchanov, Jose M. Alvarez
46 · 1 · 0 · 05 Feb 2025

ssProp: Energy-Efficient Training for Convolutional Neural Networks with Scheduled Sparse Back Propagation
Lujia Zhong, Shuo Huang, Yonggang Shi
49 · 0 · 0 · 31 Dec 2024

On Unsupervised Prompt Learning for Classification with Black-box Language Models
Zhen-Yu Zhang, Jiandong Zhang, Huaxiu Yao, Gang Niu, Masashi Sugiyama
21 · 2 · 0 · 04 Oct 2024

Convolutional Neural Network Compression Based on Low-Rank Decomposition
Yaping He, Linhao Jiang, Di Wu
31 · 0 · 0 · 29 Aug 2024

Mask in the Mirror: Implicit Sparsification
Tom Jacobs, R. Burkholz
42 · 3 · 0 · 19 Aug 2024

The Diversity Bonus: Learning from Dissimilar Distributed Clients in Personalized Federated Learning
Xinghao Wu, Xuefeng Liu, Jianwei Niu, Guogang Zhu, Shaojie Tang, Xiaotian Li, Jiannong Cao
FedML · 32 · 3 · 0 · 22 Jul 2024

Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood
Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin
BDL · UQCV · 24 · 4 · 0 · 25 Feb 2024

Prospector Heads: Generalized Feature Attribution for Large Models & Data
Gautam Machiraju, Alexander Derry, Arjun D Desai, Neel Guha, Amir-Hossein Karimi, James Zou, Russ Altman, Christopher Ré, Parag Mallick
AI4TS · MedIm · 43 · 0 · 0 · 18 Feb 2024

ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks
Yang Sui, Miao Yin, Yu Gong, Jinqi Xiao, Huy Phan, Bo Yuan
10 · 5 · 0 · 18 Jan 2024

Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation
Zixian Guo, Yuxiang Wei, Ming-Yu Liu, Zhilong Ji, Jinfeng Bai, Yiwen Guo, Wangmeng Zuo
VLM · 27 · 8 · 0 · 26 Dec 2023

Grounding Foundation Models through Federated Transfer Learning: A General Framework
Yan Kang, Tao Fan, Hanlin Gu, Xiaojin Zhang, Lixin Fan, Qiang Yang
AI4CE · 68 · 19 · 0 · 29 Nov 2023

Training Large Language Models Efficiently with Sparsity and Dataflow
V. Srinivasan, Darshan Gandhi, Urmish Thakker, R. Prabhakar
MoE · 30 · 6 · 0 · 11 Apr 2023

Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data
Kashun Shum, Shizhe Diao, Tong Zhang
ReLM · LRM · 28 · 128 · 0 · 24 Feb 2023

SparseProp: Efficient Sparse Backpropagation for Faster Training of Neural Networks
Mahdi Nikdan, Tommaso Pegolotti, Eugenia Iofinova, Eldar Kurtic, Dan Alistarh
18 · 11 · 0 · 09 Feb 2023

Probabilistic Bilevel Coreset Selection
Xiao Zhou, Renjie Pi, Weizhong Zhang, Yong Lin, Tong Zhang
NoLa · 23 · 27 · 0 · 24 Jan 2023

Model Agnostic Sample Reweighting for Out-of-Distribution Learning
Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, Tong Zhang
OODD · 28 · 60 · 0 · 24 Jan 2023

Balance is Essence: Accelerating Sparse Training via Adaptive Gradient Correction
Bowen Lei, Dongkuan Xu, Ruqi Zhang, Shuren He, Bani Mallick
27 · 6 · 0 · 09 Jan 2023

Robust Federated Learning against both Data Heterogeneity and Poisoning Attack via Aggregation Optimization
Yueqi Xie, Weizhong Zhang, Renjie Pi, Fangzhao Wu, Qifeng Chen, Xing Xie, Sunghun Kim
FedML · 18 · 7 · 0 · 10 Nov 2022

Gradient-based Weight Density Balancing for Robust Dynamic Sparse Training
Mathias Parger, Alexander Ertl, Paul Eibensteiner, J. H. Mueller, Martin Winter, M. Steinberger
31 · 0 · 0 · 25 Oct 2022

Why Random Pruning Is All We Need to Start Sparse
Advait Gadhikar, Sohom Mukherjee, R. Burkholz
41 · 19 · 0 · 05 Oct 2022

Write and Paint: Generative Vision-Language Models are Unified Modal Learners
Shizhe Diao, Wangchunshu Zhou, Xinsong Zhang, Jiawei Wang
MLLM · AI4CE · 14 · 16 · 0 · 15 Jun 2022

VLUE: A Multi-Task Benchmark for Evaluating Vision-Language Models
Wangchunshu Zhou, Yan Zeng, Shizhe Diao, Xinsong Zhang
CoGe · VLM · 26 · 13 · 0 · 30 May 2022

Dimensionality Reduced Training by Pruning and Freezing Parts of a Deep Neural Network, a Survey
Paul Wimmer, Jens Mehnert, A. P. Condurache
DD · 34 · 20 · 0 · 17 May 2022

Finding Dynamics Preserving Adversarial Winning Tickets
Xupeng Shi, Pengfei Zheng, Adam Ding, Yuan Gao, Weizhong Zhang
AAML · 21 · 1 · 0 · 14 Feb 2022

Black-box Prompt Learning for Pre-trained Language Models
Shizhe Diao, Zhichao Huang, Ruijia Xu, Xuechun Li, Yong Lin, Xiao Zhou, Tong Zhang
VLM · AAML · 28 · 68 · 0 · 21 Jan 2022

Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda, Jonathan Frankle, Michael Carbin
224 · 382 · 0 · 05 Mar 2020

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications
Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam
3DH · 950 · 20,561 · 0 · 17 Apr 2017