Variational Dropout Sparsifies Deep Neural Networks

International Conference on Machine Learning (ICML), 2017
19 January 2017
Dmitry Molchanov
Arsenii Ashukha
Dmitry Vetrov
Tags: BDL

Papers citing "Variational Dropout Sparsifies Deep Neural Networks"

Showing 50 of 481 citing papers.
Pruning Convolutional Filters using Batch Bridgeout
IEEE Access, 2020
Najeeb Khan, Ian Stavness
23 Sep 2020

A Progressive Sub-Network Searching Framework for Dynamic Inference
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020
Li Yang, Zhezhi He, Yu Cao, Deliang Fan
Tags: AI4CE
11 Sep 2020

FlipOut: Uncovering Redundant Weights via Sign Flipping
A. Apostol, M. Stol, Patrick Forré
Tags: UQCV
05 Sep 2020

SparseRT: Accelerating Unstructured Sparsity on GPUs for Deep Learning Inference
International Conference on Parallel Architectures and Compilation Techniques (PACT), 2020
Ziheng Wang
26 Aug 2020

HALO: Learning to Prune Neural Networks with Shrinkage
Skyler Seto, M. Wells, Wenyu Zhang
24 Aug 2020

Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu
21 Aug 2020

Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
European Conference on Computer Vision (ECCV), 2020
Anh-Huy Phan, Konstantin Sobolev, Konstantin Sozykin, Dmitry Ermilov, Julia Gusak, P. Tichavský, Valeriy Glukhov, Ivan Oseledets, A. Cichocki
Tags: BDL
12 Aug 2020

Growing Efficient Deep Networks by Structured Continuous Sparsification
International Conference on Learning Representations (ICLR), 2020
Xin Yuan, Pedro H. P. Savarese, Michael Maire
Tags: 3DPC
30 Jul 2020

MaxDropout: Deep Neural Network Regularization Based on Maximum Output Values
International Conference on Pattern Recognition (ICPR), 2020
C. F. G. Santos, Danilo Colombo, Mateus Roder, João Paulo Papa
27 Jul 2020
OccamNet: A Fast Neural Model for Symbolic Regression at Scale
Owen Dugan, Rumen Dangovski, Allan dos Santos Costa, Samuel Kim, Pawan Goyal, J. Jacobson, M. Soljačić
16 Jul 2020

VINNAS: Variational Inference-based Neural Network Architecture Search
Martin Ferianc, Hongxiang Fan, Miguel R. D. Rodrigues
Tags: 3DPC
12 Jul 2020

Quantifying and Leveraging Predictive Uncertainty for Medical Image Assessment
Florin-Cristian Ghesu, Bogdan Georgescu, Awais Mansoor, Y. Yoo, Eli Gibson, ..., Ramandeep Singh, S. Digumarthy, Mannudeep K. Kalra, Sasa Grbic, Dorin Comaniciu
Tags: UQCV, EDL
08 Jul 2020

Operation-Aware Soft Channel Pruning using Differentiable Masks
International Conference on Machine Learning (ICML), 2020
Minsoo Kang, Bohyung Han
Tags: AAML
08 Jul 2020

Multi-Task Variational Information Bottleneck
Weizhu Qian, Bowei Chen, Yichao Zhang, Guanghui Wen, Franck Gechter
01 Jul 2020

ESPN: Extremely Sparse Pruned Networks
Minsu Cho, Ameya Joshi, Chinmay Hegde
28 Jun 2020
Topological Insights into Sparse Neural Networks
Shiwei Liu, T. Lee, Anil Yaman, Zahra Atashgahi, David L. Ferraro, Ghada Sokar, Mykola Pechenizkiy, Decebal Constantin Mocanu
24 Jun 2020

Principal Component Networks: Parameter Reduction Early in Training
R. Waleffe, Theodoros Rekatsinas
Tags: 3DPC
23 Jun 2020

Revisiting Loss Modelling for Unstructured Pruning
César Laurent, Camille Ballas, Thomas George, Nicolas Ballas, Pascal Vincent
22 Jun 2020

Sparse GPU Kernels for Deep Learning
Trevor Gale, Matei A. Zaharia, C. Young, Erich Elsen
18 Jun 2020

Directional Pruning of Deep Neural Networks
Shih-Kang Chao, Zhanyu Wang, Yue Xing, Guang Cheng
Tags: ODL
16 Jun 2020

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Matthias Kirchler
Tags: UQCV, FAtt
16 Jun 2020

Finding trainable sparse networks through Neural Tangent Transfer
Tianlin Liu, Friedemann Zenke
15 Jun 2020

AlgebraNets
Jordan Hoffmann, Simon Schmitt, Simon Osindero, Karen Simonyan, Erich Elsen
Tags: MoE
12 Jun 2020
Dynamic Model Pruning with Feedback
International Conference on Learning Representations (ICLR), 2020
Tao Lin, Sebastian U. Stich, Luis Barba, Daniil Dmitriev, Martin Jaggi
12 Jun 2020

Convolutional neural networks compression with low rank and sparse tensor decompositions
Pavel Kaloshin
11 Jun 2020

A Framework for Neural Network Pruning Using Gibbs Distributions
Alex Labach, S. Valaee
08 Jun 2020

Uncertainty-Aware Deep Classifiers using Generative Models
Murat Sensoy, Lance M. Kaplan, Federico Cerutti, Maryam Saleki
Tags: UQCV, OOD
07 Jun 2020

An Overview of Neural Network Compression
James O'Neill
Tags: AI4CE
05 Jun 2020

Weight Pruning via Adaptive Sparsity Loss
George Retsinas, Athena Elafrou, G. Goumas, Petros Maragos
04 Jun 2020

Pruning via Iterative Ranking of Sensitivity Statistics
Stijn Verdenius, M. Stol, Patrick Forré
Tags: AAML
01 Jun 2020
Efficient and Scalable Bayesian Neural Nets with Rank-1 Factors
Michael W. Dusenberry, Ghassen Jerfel, Yeming Wen, Yi-An Ma, Jasper Snoek, Katherine A. Heller, Balaji Lakshminarayanan, Dustin Tran
Tags: UQCV, BDL
14 May 2020

Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu, Zhe Xu, Runbin Shi, R. Cheung, Hayden Kwok-Hay So
14 May 2020

Data-Free Network Quantization With Adversarial Knowledge Distillation
Yoojin Choi, Jihwan P. Choi, Mostafa El-Khamy, Jungwon Lee
Tags: MQ
08 May 2020

Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
International Conference on Artificial Neural Networks (ICANN), 2020
Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
30 Apr 2020

Out-of-the-box channel pruned networks
Ragav Venkatesan, Gurumurthy Swaminathan, Xiong Zhou, Anna Luo
30 Apr 2020

WoodFisher: Efficient Second-Order Approximation for Neural Network Compression
Sidak Pal Singh, Dan Alistarh
29 Apr 2020

Training with Quantization Noise for Extreme Model Compression
International Conference on Learning Representations (ICLR), 2020
Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Armand Joulin
Tags: MQ
15 Apr 2020
Minimizing FLOPs to Learn Efficient Sparse Representations
International Conference on Learning Representations (ICLR), 2020
Biswajit Paria, Chih-Kuan Yeh, Ian En-Hsu Yen, N. Xu, Pradeep Ravikumar, Barnabás Póczós
12 Apr 2020

Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)
Arturo Marbán, Daniel Becking, Simon Wiedemann, Wojciech Samek
Tags: MQ
02 Apr 2020

Information-Theoretic Probing with Minimum Description Length
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Elena Voita, Ivan Titov
27 Mar 2020

Bayesian Sparsification Methods for Deep Complex-valued Networks
Ivan Nazarov, Evgeny Burnaev
Tags: BDL
25 Mar 2020

Dynamic Narrowing of VAE Bottlenecks Using GECO and L0 Regularization
IEEE International Joint Conference on Neural Networks (IJCNN), 2020
Cedric De Boom, Samuel T. Wauthier, Tim Verbelen, Bart Dhoedt
Tags: DRL
24 Mar 2020

SASL: Saliency-Adaptive Sparsity Learning for Neural Network Acceleration
Jun Shi, Jianfeng Xu, K. Tasaka, Zhibo Chen
12 Mar 2020

Pruned Neural Networks are Surprisingly Modular
Daniel Filan, Shlomi Hod, Cody Wild, Andrew Critch, Stuart J. Russell
10 Mar 2020

Trends and Advancements in Deep Neural Network Communication
Felix Sattler, Thomas Wiegand, Wojciech Samek
Tags: GNN
06 Mar 2020

What is the State of Neural Network Pruning?
Conference on Machine Learning and Systems (MLSys), 2020
Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
06 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning
International Conference on Learning Representations (ICLR), 2020
Alex Renda, Jonathan Frankle, Michael Carbin
05 Mar 2020

A Note on Latency Variability of Deep Neural Networks for Mobile Inference
Luting Yang, Bingqian Lu, Shaolei Ren
29 Feb 2020

Learned Threshold Pruning
K. Azarian, Brandon Smart, Jinwon Lee, Tijmen Blankevoort
Tags: MQ
28 Feb 2020

Informative Bayesian Neural Network Priors for Weak Signals
Bayesian Analysis, 2020
Tianyu Cui, A. Havulinna, Pekka Marttinen, Samuel Kaski
24 Feb 2020