Variational Dropout Sparsifies Deep Neural Networks

International Conference on Machine Learning (ICML), 2017
19 January 2017
Dmitry Molchanov
Arsenii Ashukha
Dmitry Vetrov
    BDL
arXiv: 1701.05369 (abs · PDF · HTML)
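
The paper's central idea is to learn an individual dropout rate for every weight via variational inference under a log-uniform prior: weights whose learned noise dominates their signal (large log α) carry no information and can be pruned, which yields extremely sparse networks. Below is a minimal sketch of such a layer, assuming PyTorch; the class name, initialization values, and clamp range are illustrative choices, while the KL approximation constants (k1, k2, k3) and the log α > 3 pruning rule follow the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearSVDO(nn.Module):
    """Fully connected layer with per-weight (sparse) variational dropout.

    Multiplicative Gaussian noise on each weight is reparameterized
    additively: w ~ N(theta, sigma^2), with alpha = sigma^2 / theta^2.
    """

    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.theta = nn.Parameter(torch.empty(out_features, in_features))
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -10.0))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = threshold  # prune weights with log alpha above this
        nn.init.kaiming_uniform_(self.theta)

    @property
    def log_alpha(self):
        # log alpha = log sigma^2 - log theta^2, clamped for stability
        return torch.clamp(
            self.log_sigma2 - torch.log(self.theta ** 2 + 1e-16), -10.0, 10.0)

    def forward(self, x):
        if self.training:
            # Local reparameterization: sample pre-activations, not weights.
            mean = F.linear(x, self.theta, self.bias)
            var = F.linear(x ** 2, torch.exp(self.log_sigma2))
            return mean + torch.sqrt(var + 1e-16) * torch.randn_like(mean)
        # At test time, zero out weights with a high learned dropout rate.
        mask = (self.log_alpha < self.threshold).float()
        return F.linear(x, self.theta * mask, self.bias)

    def kl(self):
        # The paper's tight approximation of KL(q || log-uniform prior),
        # summed over weights; add this (suitably scaled) to the loss.
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

Training would minimize the task loss plus the summed kl() of all such layers (commonly ramped in over the first epochs); the achieved sparsity is then the fraction of weights with log α above the threshold.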

Papers citing "Variational Dropout Sparsifies Deep Neural Networks"

50 / 481 papers shown
1xN Pattern for Pruning Convolutional Neural Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Mingbao Lin
Yu-xin Zhang
Yuchao Li
Bohong Chen
Jiayi Ji
Mengdi Wang
Shen Li
Yonghong Tian
Rongrong Ji
3DPC
241
50
0
31 May 2021
Spectral Pruning for Recurrent Neural Networks
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Takashi Furuya
Kazuma Suetake
K. Taniguchi
Hiroyuki Kusumoto
Ryuji Saiin
Tomohiro Daimon
147
4
0
23 May 2021
Neural 3D Scene Compression via Model Compression
Berivan Isik
298
10
0
07 May 2021
Modulating Regularization Frequency for Efficient Compression-Aware Model Training
Dongsoo Lee
S. Kwon
Byeongwook Kim
Jeongin Yun
Baeseong Park
Yongkweon Jeon
101
0
0
05 May 2021
Encoding Weights of Irregular Sparsity for Fixed-to-Fixed Model Compression
International Conference on Learning Representations (ICLR), 2021
Baeseong Park
S. Kwon
Daehwan Oh
Byeongwook Kim
Dongsoo Lee
178
4
0
05 May 2021
Effective Sparsification of Neural Networks with Global Sparsity Constraint
Computer Vision and Pattern Recognition (CVPR), 2021
Xiao Zhou
Weizhong Zhang
Hang Xu
Tong Zhang
247
75
0
03 May 2021
What Are Bayesian Neural Network Posteriors Really Like?
International Conference on Machine Learning (ICML), 2021
Pavel Izmailov
Sharad Vikram
Matthew D. Hoffman
A. Wilson
UQCV, BDL
300
434
0
29 Apr 2021
Lottery Jackpots Exist in Pre-trained Models
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Yuxin Zhang
Mingbao Lin
Yan Wang
Jiayi Ji
Rongrong Ji
343
18
0
18 Apr 2021
Robust Classification from Noisy Labels: Integrating Additional Knowledge for Chest Radiography Abnormality Assessment
Sebastian Gündel
A. Setio
Florin-Cristian Ghesu
Sasa Grbic
Bogdan Georgescu
Andreas Maier
Dorin Comaniciu
NoLa
212
30
0
12 Apr 2021
Not All Attention Is All You Need
Hongqiu Wu
Hai Zhao
Min Zhang
231
10
0
10 Apr 2021
Efficacy of Bayesian Neural Networks in Active Learning
Vineeth Rakesh
Swayambhoo Jain
BDL
104
10
0
02 Apr 2021
Cascade Weight Shedding in Deep Neural Networks: Benefits and Pitfalls for Network Pruning
K. Azarian
Fatih Porikli
CVBM
71
0
0
19 Mar 2021
Contextual Dropout: An Efficient Sample-Dependent Dropout Module
International Conference on Learning Representations (ICLR), 2021
Xinjie Fan
Shujian Zhang
Korawat Tanwisuth
Xiaoning Qian
Mingyuan Zhou
OOD, BDL, UQCV
160
31
0
06 Mar 2021
LocalDrop: A Hybrid Regularization for Deep Neural Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Ziqing Lu
Chang Xu
Bo Du
Takashi Ishida
Guang Dai
Masashi Sugiyama
177
17
0
01 Mar 2021
An Information-Theoretic Justification for Model Pruning
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Berivan Isik
Tsachy Weissman
Albert No
294
39
0
16 Feb 2021
Structured Dropout Variational Inference for Bayesian Neural Networks
Neural Information Processing Systems (NeurIPS), 2021
S. Nguyen
Duong Nguyen
Khai Nguyen
Khoat Than
Hung Bui
Nhat Ho
BDL, DRL
249
10
0
16 Feb 2021
Bayesian Neural Network Priors Revisited
International Conference on Learning Representations (ICLR), 2021
Vincent Fortuin
Adrià Garriga-Alonso
Sebastian W. Ober
F. Wenzel
Gunnar Rätsch
Richard Turner
Mark van der Wilk
Laurence Aitchison
BDL, UQCV
333
154
0
12 Feb 2021
Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach
IEEE Journal on Selected Areas in Communications (JSAC), 2021
Jiawei Shao
Yuyi Mao
Jun Zhang
219
269
0
08 Feb 2021
Extracting the Auditory Attention in a Dual-Speaker Scenario from EEG using a Joint CNN-LSTM Model
Frontiers in Physiology (Front. Physiol.), 2021
Ivine Kuruvila
J. Muncke
Eghart Fischer
U. Hoppe
77
31
0
08 Feb 2021
SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2021
Enzo Tartaglione
Andrea Bragagnolo
Francesco Odierna
Attilio Fiandrotti
Marco Grangetto
157
22
0
07 Feb 2021
Deep Model Compression based on the Training History
Neurocomputing, 2021
S. H. Shabbeer Basha
M. Farazuddin
Viswanath Pulabaigari
S. Dubey
Snehasis Mukherjee
VLM
243
25
0
30 Jan 2021
Variational Nested Dropout
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2021
Yufei Cui
Yushun Mao
Ziquan Liu
Qiao Li
Antoni B. Chan
Xue Liu
Tei-Wei Kuo
Chun Jason Xue
BDL
122
5
0
27 Jan 2021
Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Neurocomputing, 2021
Tailin Liang
C. Glossner
Lei Wang
Shaobo Shi
Xiaotong Zhang
MQ
460
841
0
24 Jan 2021
Non-Convex Compressed Sensing with Training Data
G. Welper
173
1
0
20 Jan 2021
SparseDNN: Fast Sparse Deep Learning Inference on CPUs
Ziheng Wang
MQ
285
21
0
20 Jan 2021
Rescaling CNN through Learnable Repetition of Network Parameters
IEEE International Conference on Image Processing (ICIP), 2021
Arnav Chavan
Udbhav Bamba
Rishabh Tiwari
D. K. Gupta
98
0
0
14 Jan 2021
B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic Meta-Learning
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2021
Anish Madan
Ranjitha Prasad
BDL
78
3
0
01 Jan 2021
Enabling Retrain-free Deep Neural Network Pruning using Surrogate Lagrangian Relaxation
Deniz Gurevin
Shangli Zhou
Lynn Pepin
Bingbing Li
Mikhail A. Bragin
Caiwen Ding
Fei Miao
107
3
0
18 Dec 2020
Neural Pruning via Growing Regularization
International Conference on Learning Representations (ICLR), 2020
Huan Wang
Can Qin
Yulun Zhang
Y. Fu
251
180
0
16 Dec 2020
The Role of Regularization in Shaping Weight and Node Pruning Dependency and Dynamics
Yael Ben-Guigui
Jacob Goldberger
Tammy Riklin-Raviv
150
0
0
07 Dec 2020
DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and $L_0$ Regularization
Yaniv Shulman
270
4
0
07 Dec 2020
Semi-Supervised Learning with Variational Bayesian Inference and Maximum Uncertainty Regularization
AAAI Conference on Artificial Intelligence (AAAI), 2020
Kien Do
T. Tran
Svetha Venkatesh
BDL
193
5
0
03 Dec 2020
Asymptotic convergence rate of Dropout on shallow linear neural networks
Measurement and Modeling of Computer Systems (SIGMETRICS), 2020
Albert Senen-Cerda
J. Sanders
192
9
0
01 Dec 2020
Bringing AI To Edge: From Deep Learning's Perspective
Neurocomputing, 2020
Di Liu
Hao Kong
Xiangzhong Luo
Weichen Liu
Ravi Subramaniam
226
151
0
25 Nov 2020
Generalized Variational Continual Learning
International Conference on Learning Representations (ICLR), 2020
Noel Loo
S. Swaroop
Richard Turner
BDL, CLL
180
70
0
24 Nov 2020
Rethinking Weight Decay For Efficient Neural Network Pruning
Journal of Imaging (JI), 2020
Hugo Tessier
Vincent Gripon
Mathieu Léonardon
M. Arzel
T. Hannagan
David Bertrand
275
29
0
20 Nov 2020
Dynamic Hard Pruning of Neural Networks at the Edge of the Internet
Journal of Network and Computer Applications (JNCA), 2020
Lorenzo Valerio
F. M. Nardini
A. Passarella
R. Perego
187
16
0
17 Nov 2020
LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
Neural Networks (NN), 2020
Enzo Tartaglione
Andrea Bragagnolo
Attilio Fiandrotti
Marco Grangetto
ODL, UQCV
226
35
0
16 Nov 2020
Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee
Neural Information Processing Systems (NeurIPS), 2020
Jincheng Bai
Qifan Song
Guang Cheng
BDL
148
48
0
15 Nov 2020
Dirichlet Pruning for Neural Network Compression
Kamil Adamczewski
Mijung Park
158
5
0
10 Nov 2020
Sparse within Sparse Gaussian Processes using Neighbor Information
Gia-Lac Tran
Dimitrios Milios
Pietro Michiardi
Maurizio Filippone
313
19
0
10 Nov 2020
Observation Space Matters: Benchmark and Optimization Algorithm
IEEE International Conference on Robotics and Automation (ICRA), 2020
J. Kim
Sehoon Ha
OOD, OffRL
187
12
0
02 Nov 2020
Greedy Optimization Provably Wins the Lottery: Logarithmic Number of Winning Tickets is Enough
Neural Information Processing Systems (NeurIPS), 2020
Mao Ye
Lemeng Wu
Qiang Liu
127
17
0
29 Oct 2020
On Convergence and Generalization of Dropout Training
Neural Information Processing Systems (NeurIPS), 2020
Poorya Mianjy
R. Arora
209
33
0
23 Oct 2020
Failure Prediction by Confidence Estimation of Uncertainty-Aware Dirichlet Networks
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020
Theodoros Tsiligkaridis
UQCV
93
8
0
19 Oct 2020
Layer-adaptive sparsity for the Magnitude-based Pruning
Jaeho Lee
Sejun Park
Sangwoo Mo
SungSoo Ahn
Jinwoo Shin
224
288
0
15 Oct 2020
Improve the Robustness and Accuracy of Deep Neural Network with $L_{2,\infty}$ Normalization
Lijia Yu
Xiao-Shan Gao
37
0
0
10 Oct 2020
Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Utku Evci
Yani Andrew Ioannou
Cem Keskin
Yann N. Dauphin
190
100
0
07 Oct 2020
A Survey on Deep Neural Network Compression: Challenges, Overview, and Solutions
Rahul Mishra
Hari Prabhat Gupta
Tanima Dutta
125
102
0
05 Oct 2020
PipeTune: Pipeline Parallelism of Hyper and System Parameters Tuning for Deep Learning Clusters
International Middleware Conference (Middleware), 2020
Isabelly Rocha
Nathaniel Morris
L. Chen
Pascal Felber
Robert Birke
V. Schiavoni
177
11
0
01 Oct 2020