ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks

27 December 2021
Kevin Lee Hunter, Lawrence Spracklen, Subutai Ahmad
arXiv:2112.13896 (PDF, HTML)

Papers citing "Two Sparsities Are Better Than One: Unlocking the Performance Benefits of Sparse-Sparse Networks"

18 papers shown.
Self-Ablating Transformers: More Interpretability, Less Sparsity [MILM]
Jeremias Ferrao, Luhan Mikaelson, Keenan Pepper, Natalia Perez-Campanero Antolin
01 May 2025
Event-Based Eye Tracking. 2025 Event-based Vision Workshop
Qinyu Chen, Chang Gao, Min Liu, Daniele Perrone, Yan Ru Pei, ..., Hoang M. Truong, Vinh-Thuan Ly, Huy G. Tran, Thuan-Phat Nguyen, Tram T. Doan
25 Apr 2025
Weight Block Sparsity: Training, Compilation, and AI Engine Accelerators
P. D'Alberto, Taehee Jeong, Akshai Jain, Shreyas Manjunath, Mrinal Sarmah, Samuel Hsu, Yaswanth Raparti, Nitesh Pipralia
12 Jul 2024
Weight Sparsity Complements Activity Sparsity in Neuromorphic Language Models
Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, David Kappel, Anand Subramoney
01 May 2024
Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training
Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, T. Delbruck
14 Dec 2023
Activity Sparsity Complements Weight Sparsity for Efficient RNN Inference
Rishav Mukherji, Mark Schöne, Khaleelulla Khan Nazeer, Christian Mayr, Anand Subramoney
13 Nov 2023
SHARP: Sparsity and Hidden Activation RePlay for Neuro-Inspired Continual Learning [CLL]
Mustafa Burak Gurbuz, J. M. Moorman, Constantinos Dovrolis
29 May 2023
STen: Productive and Efficient Sparsity in PyTorch
Andrei Ivanov, Nikoli Dryden, Tal Ben-Nun, Saleh Ashkboos, Torsten Hoefler
15 Apr 2023
A Study of Biologically Plausible Neural Network: The Role and Interactions of Brain-Inspired Mechanisms in Continual Learning
F. Sarfraz, Elahe Arani, Bahram Zonooz
13 Apr 2023
Spatial Mixture-of-Experts [MoE]
Nikoli Dryden, Torsten Hoefler
24 Nov 2022
Unlocking the potential of two-point cells for energy-efficient and resilient training of deep nets
Ahsan Adeel, A. Adetomi, K. Ahmed, Amir Hussain, T. Arslan, William A. Phillips
24 Oct 2022
Bridging the Gap between Artificial Intelligence and Artificial General Intelligence: A Ten Commandment Framework for Human-Like Intelligence
Ananta Nair, F. Kashani
17 Oct 2022
The Role Of Biology In Deep Learning
Robert Bain
07 Sep 2022
Efficient recurrent architectures through activity sparsity and sparse back-propagation through time
Anand Subramoney, Khaleelulla Khan Nazeer, Mark Schöne, Christian Mayr, David Kappel
13 Jun 2022
Training for temporal sparsity in deep neural networks, application in video processing
Amirreza Yousefzadeh, Manolis Sifalakis
15 Jul 2021
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks [MQ]
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
31 Jan 2021
Activation Density based Mixed-Precision Quantization for Energy Efficient Neural Networks [MQ]
Karina Vasquez, Yeshwanth Venkatesha, Abhiroop Bhattacharjee, Abhishek Moitra, Priyadarshini Panda
12 Jan 2021
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Z. Tu, Kaiming He
16 Nov 2016