Variational Dropout Sparsifies Deep Neural Networks
International Conference on Machine Learning (ICML), 2017 · 19 January 2017 · arXiv:1701.05369
Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
Tags: BDL
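As the title indicates, the paper's central result is that variational dropout with individual per-weight dropout rates drives many of those rates toward one, so the corresponding weights can be pruned, yielding very sparse networks. The PyTorch-style layer below is a minimal illustrative sketch of that idea, not code taken from the paper or from this page: the class name, initialization values, the commonly cited KL approximation constants, and the log-alpha pruning threshold of 3 are assumptions made for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVDLinear(nn.Module):
    """Illustrative linear layer with per-weight variational dropout (sparse VD).
    Names, constants, and the threshold are assumptions for this sketch."""

    def __init__(self, in_features, out_features, threshold=3.0):
        super().__init__()
        self.weight = nn.Parameter(0.02 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # log sigma^2 of the Gaussian noise on each weight, learned jointly with the weights
        self.log_sigma2 = nn.Parameter(torch.full((out_features, in_features), -10.0))
        self.threshold = threshold  # prune weights whose log alpha exceeds this value

    @property
    def log_alpha(self):
        # alpha = sigma^2 / theta^2, clamped for numerical stability
        return torch.clamp(self.log_sigma2 - 2.0 * torch.log(self.weight.abs() + 1e-8), -10.0, 10.0)

    def forward(self, x):
        if self.training:
            # local reparameterization: sample the pre-activations instead of the weights
            mu = F.linear(x, self.weight, self.bias)
            std = torch.sqrt(F.linear(x * x, torch.exp(self.log_sigma2)) + 1e-8)
            return mu + std * torch.randn_like(mu)
        # at test time, zero out weights with large dropout rates (log alpha above threshold)
        mask = (self.log_alpha < self.threshold).float()
        return F.linear(x, self.weight * mask, self.bias)

    def kl(self):
        # approximation of the KL term between the approximate posterior and the log-uniform prior
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        la = self.log_alpha
        neg_kl = k1 * torch.sigmoid(k2 + k3 * la) - 0.5 * F.softplus(-la) - k1
        return -neg_kl.sum()

In use, the layer's kl() term would be added to the training loss (typically weighted relative to the dataset size), and at evaluation time weights whose log alpha exceeds the threshold are zeroed out, which is what produces the sparsity.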

Papers citing "Variational Dropout Sparsifies Deep Neural Networks"

50 / 481 papers shown
Title
Gradual Channel Pruning while Training using Feature Relevance Scores for Convolutional Neural Networks
IEEE Access, 2020 · Sai Aparna Aketi, Sourjya Roy, A. Raghunathan, Kaushik Roy · 23 Feb 2020

Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks
AAAI Conference on Artificial Intelligence (AAAI), 2020 · Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu · 23 Feb 2020

Neural Network Compression Framework for fast model inference
Alexander Kozlov, Ivan Lazarevich, Vasily Shamporov, N. Lyalyushkin, Yury Gorbachev · 20 Feb 2020

Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning
Workshop on Representation Learning for NLP (RepL4NLP), 2020 · Mitchell A. Gordon, Kevin Duh, Nicholas Andrews · Tags: VLM · 19 Feb 2020

Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
International Conference on Learning Representations (ICLR), 2020 · Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, Dmitry Vetrov · Tags: UQCV, FedML · 15 Feb 2020

Calibrate and Prune: Improving Reliability of Lottery Tickets Through Prediction Calibration
Bindya Venkatesh, Jayaraman J. Thiagarajan, Kowshik Thopalli, P. Sattigeri · 10 Feb 2020

Soft Threshold Weight Reparameterization for Learnable Sparsity
International Conference on Machine Learning (ICML), 2020 · Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi · 08 Feb 2020

Almost Sure Convergence of Dropout Algorithms for Neural Networks
Albert Senen-Cerda, J. Sanders · 06 Feb 2020

Proving the Lottery Ticket Hypothesis: Pruning is All You Need
International Conference on Machine Learning (ICML), 2020 · Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir · 03 Feb 2020

An Equivalence between Bayesian Priors and Penalties in Variational Inference
Pierre Wolinski, Guillaume Charpiat, Yann Ollivier · Tags: BDL · 01 Feb 2020

PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination
Saurabh Goyal, Anamitra R. Choudhury, Saurabh ManishRaje, Venkatesan T. Chakaravarthy, Yogish Sabharwal, Ashish Verma · 24 Jan 2020

Variational Dropout Sparsification for Particle Identification speed-up
Artem Sergeevich Ryzhikov, D. Derkach, M. Hushchyn · 21 Jan 2020

A "Network Pruning Network" Approach to Deep Model Compression
IEEE Workshop/Winter Conference on Applications of Computer Vision (WACV), 2020 · Vinay Kumar Verma, Pravendra Singh, Vinay P. Namboodiri, Piyush Rai · Tags: 3DPC, VLM · 15 Jan 2020

Block-wise Dynamic Sparseness
Pattern Recognition Letters, 2020 · Amir Hadifar, Johannes Deleu, Chris Develder, Thomas Demeester · 14 Jan 2020

Campfire: Compressible, Regularization-Free, Structured Sparse Training for Hardware Accelerators
Noah Gamboa, Kais Kudrolli, Anand Dhoot, A. Pedram · 09 Jan 2020

Resource-Efficient Neural Networks for Embedded Systems
Wolfgang Roth, Günther Schindler, Lukas Pfeifenberger, Robert Peharz, Sebastian Tschiatschek, Holger Fröning, Franz Pernkopf, Zoubin Ghahramani · 07 Jan 2020

Sparse Weight Activation Training
Neural Information Processing Systems (NeurIPS), 2020 · Md Aamir Raihan, Tor M. Aamodt · 07 Jan 2020

Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks
International Conference on Machine Learning (ICML), 2019 · Aleksandr Shevchenko, Marco Mondelli · 20 Dec 2019

Taxonomy and Evaluation of Structured Compression of Convolutional Neural Networks
Andrey Kuzmin, Markus Nagel, Saurabh Pitre, Sandeep Pendyam, Tijmen Blankevoort, Max Welling · 20 Dec 2019

$\ell_0$ Regularized Structured Sparsity Convolutional Neural Networks
Kevin Bui, Fredrick Park, Shuai Zhang, Y. Qi, Jack Xin · 17 Dec 2019

Integration of Neural Network-Based Symbolic Regression in Deep Learning for Scientific Discovery
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2019 · Samuel Kim, Peter Y. Lu, Srijon Mukherjee, M. Gilbert, Li Jing, V. Ceperic, Marin Soljacic · 10 Dec 2019

Frivolous Units: Wider Networks Are Not Really That Wide
AAAI Conference on Artificial Intelligence (AAAI), 2019 · Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman · 10 Dec 2019

Hierarchical Indian Buffet Neural Networks for Bayesian Continual Learning
Conference on Uncertainty in Artificial Intelligence (UAI), 2019 · Samuel Kessler, Vu Nguyen, S. Zohren, Stephen J. Roberts · Tags: BDL · 04 Dec 2019

One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation
International Conference on Learning Representations (ICLR), 2019 · Matthew Shunshi Zhang, Bradly C. Stadie · 30 Nov 2019

Efficient Approximate Inference with Walsh-Hadamard Variational Inference
Simone Rossi, Sébastien Marmin, Maurizio Filippone · Tags: BDL · 29 Nov 2019

Semi-Relaxed Quantization with DropBits: Training Low-Bit Neural Networks via Bit-wise Regularization
J. H. Lee, Jihun Yun, Sung Ju Hwang, Eunho Yang · Tags: MQ · 29 Nov 2019

A Novel Unsupervised Post-Processing Calibration Method for DNNS with Robustness to Domain Shift
A. Mozafari, H. Gomes, Christian Gagné · 25 Nov 2019

Rigging the Lottery: Making All Tickets Winners
International Conference on Machine Learning (ICML), 2019 · Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, Erich Elsen · 25 Nov 2019

Continual Learning with Adaptive Weights (CLAW)
International Conference on Learning Representations (ICLR), 2019 · T. Adel, Han Zhao, Richard Turner · Tags: CLL · 21 Nov 2019

CUP: Cluster Pruning for Compressing Deep Neural Networks
Rahul Duggal, Cao Xiao, R. Vuduc, Jimeng Sun · Tags: 3DPC, VLM · 19 Nov 2019

A Discriminative Gaussian Mixture Model with Sparsity
International Conference on Learning Representations (ICLR), 2019 · Hideaki Hayashi, S. Uchida · 14 Nov 2019

Structured Sparsification of Gated Recurrent Neural Networks
AAAI Conference on Artificial Intelligence (AAAI), 2019 · E. Lobacheva, Nadezhda Chirkova, Alexander Markovich, Dmitry Vetrov · 13 Nov 2019

Learning Sparse Sharing Architectures for Multiple Tasks
AAAI Conference on Artificial Intelligence (AAAI), 2019 · Tianxiang Sun, Yunfan Shao, Xiaonan Li, Pengfei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang · Tags: MoE · 12 Nov 2019

Variational Student: Learning Compact and Sparser Networks in Knowledge Distillation Framework
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2019 · Srinidhi Hegde, Ranjitha Prasad, R. Hebbalaguppe, Vishwajith Kumar · 26 Oct 2019

Reduced-Order Modeling of Deep Neural Networks
Computational Mathematics and Mathematical Physics (CMMP), 2019 · Julia Gusak, Talgat Daulbaev, E. Ponomarev, A. Cichocki, Ivan Oseledets · Tags: BDL, AI4CE · 15 Oct 2019

State of Compact Architecture Search For Deep Neural Networks
M. Shafiee, Andrew Hryniowski, Francis Li, Z. Q. Lin, A. Wong · 15 Oct 2019

If dropout limits trainable depth, does critical initialisation still matter? A large-scale statistical analysis on ReLU networks
Pattern Recognition Letters, 2019 · Arnu Pretorius, Elan Van Biljon, Benjamin van Niekerk, Ryan Eloff, Matthew Reynard, Steven D. James, Benjamin Rosman, Herman Kamper, Steve Kroon · 13 Oct 2019

Information Aware Max-Norm Dirichlet Networks for Predictive Uncertainty Estimation
Theodoros Tsiligkaridis · Tags: UQCV, BDL · 10 Oct 2019

Structured Pruning of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019 · Ziheng Wang, Jeremy Wohlwend, Tao Lei · 10 Oct 2019

Deep Evidential Regression
Neural Information Processing Systems (NeurIPS), 2019 · Alexander Amini, Wilko Schwarting, A. Soleimany, Daniela Rus · Tags: EDL, PER, BDL, UD, UQCV · 07 Oct 2019

Class-dependent Compression of Deep Neural Networks
R. Entezari, O. Saukh · 23 Sep 2019

Defeating Misclassification Attacks Against Transfer Learning
IEEE Transactions on Dependable and Secure Computing (TDSC), 2019 · Bang Wu, Shuo Wang, Lizhen Qu, Cong Wang, Carsten Rudolph, Xiangwen Yang · Tags: AAML · 29 Aug 2019

Image Captioning with Sparse Recurrent Neural Network
J. Tan, Chee Seng Chan, Joon Huang Chuah · Tags: VLM · 28 Aug 2019

DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures
International Conference on Learning Representations (ICLR), 2019 · Huanrui Yang, W. Wen, Xue Yang · 27 Aug 2019

Neural Plasticity Networks
IEEE International Joint Conference on Neural Networks (IJCNN), 2019 · Yongqian Li, Shihao Ji · 13 Aug 2019

Adversarial Neural Pruning with Latent Vulnerability Suppression
Divyam Madaan, Jinwoo Shin, Sung Ju Hwang · Tags: AAML · 12 Aug 2019

Group Pruning using a Bounded-Lp norm for Group Gating and Regularization
German Conference on Pattern Recognition (DAGM), 2019 · Chaithanya Kumar Mummadi, Tim Genewein, Dan Zhang, Thomas Brox, Volker Fischer · 09 Aug 2019

Exploiting Channel Similarity for Accelerating Deep Convolutional Neural Networks
Yunxiang Zhang, Chenglong Zhao, Bingbing Ni, Jian Zhang, Haoran Deng · 06 Aug 2019

Distilling Knowledge From a Deep Pose Regressor Network
IEEE International Conference on Computer Vision (ICCV), 2019 · Muhamad Risqi U. Saputra, Pedro Porto Buarque de Gusmão, Yasin Almalioglu, Andrew Markham, A. Trigoni · 02 Aug 2019

Uncertainty Quantification in Deep Learning for Safer Neuroimage Enhancement
Ryutaro Tanno, Daniel E. Worrall, Enrico Kaden, Aurobrata Ghosh, Francesco Grussu, A. Bizzi, S. Sotiropoulos, A. Criminisi, Daniel C. Alexander · Tags: MedIm, DiffM · 31 Jul 2019