1701.05369
Variational Dropout Sparsifies Deep Neural Networks
19 January 2017
Dmitry Molchanov
Arsenii Ashukha
Dmitry Vetrov
BDL
Papers citing
"Variational Dropout Sparsifies Deep Neural Networks"
50 / 126 papers shown
Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions
Huan Ma
Zongbo Han
Changqing Zhang
H. Fu
Joey Tianyi Zhou
Q. Hu
EDL
UQCV
66
42
0
11 Nov 2021
MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge
Geng Yuan
Xiaolong Ma
Wei Niu
Zhengang Li
Zhenglun Kong
...
Minghai Qin
Bin Ren
Yanzhi Wang
Sijia Liu
Xue Lin
15
89
0
26 Oct 2021
Probabilistic fine-tuning of pruning masks and PAC-Bayes self-bounded learning
Soufiane Hayou
Bo He
Gintare Karolina Dziugaite
13
2
0
22 Oct 2021
Joint Channel and Weight Pruning for Model Acceleration on Mobile Devices
Tianli Zhao
Xi Sheryl Zhang
Wentao Zhu
Jiaxing Wang
Sen Yang
Ji Liu
Jian Cheng
43
2
0
15 Oct 2021
Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks
Shuai Zhang
Meng Wang
Sijia Liu
Pin-Yu Chen
Jinjun Xiong
UQCV
MLT
18
13
0
12 Oct 2021
A study of the robustness of raw waveform based speaker embeddings under mismatched conditions
Ge Zhu
Frank Cwitkowitz
Z. Duan
22
2
0
08 Oct 2021
Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Richard Schwarz
Siddhant M. Jayakumar
Razvan Pascanu
P. Latham
Yee Whye Teh
87
54
0
01 Oct 2021
Neural network relief: a pruning algorithm based on neural activity
Aleksandr Dekhovich
David Tax
M. Sluiter
Miguel A. Bessa
40
10
0
22 Sep 2021
Layer-wise Model Pruning based on Mutual Information
Chun Fan
Jiwei Li
Xiang Ao
Fei Wu
Yuxian Meng
Xiaofei Sun
38
19
0
28 Aug 2021
Learning Sparse Analytic Filters for Piano Transcription
Frank Cwitkowitz
M. Heydari
Z. Duan
25
2
0
23 Aug 2021
Explaining Bayesian Neural Networks
Kirill Bykov
Marina M.-C. Höhne
Adelaida Creosteanu
Klaus-Robert Müller
Frederick Klauschen
Shinichi Nakajima
Marius Kloft
BDL
AAML
26
25
0
23 Aug 2021
Differentiable Subset Pruning of Transformer Heads
Jiaoda Li
Ryan Cotterell
Mrinmaya Sachan
37
53
0
10 Aug 2021
R-Drop: Regularized Dropout for Neural Networks
Xiaobo Liang
Lijun Wu
Juntao Li
Yue Wang
Qi Meng
Tao Qin
Wei Chen
M. Zhang
Tie-Yan Liu
31
424
0
28 Jun 2021
Sparse Training via Boosting Pruning Plasticity with Neuroregeneration
Shiwei Liu
Tianlong Chen
Xiaohan Chen
Zahra Atashgahi
Lu Yin
Huanyu Kou
Li Shen
Mykola Pechenizkiy
Zhangyang Wang
D. Mocanu
29
111
0
19 Jun 2021
1xN Pattern for Pruning Convolutional Neural Networks
Mingbao Lin
Yu-xin Zhang
Yuchao Li
Bohong Chen
Fei Chao
Mengdi Wang
Shen Li
Yonghong Tian
Rongrong Ji
3DPC
25
40
0
31 May 2021
Spectral Pruning for Recurrent Neural Networks
Takashi Furuya
Kazuma Suetake
K. Taniguchi
Hiroyuki Kusumoto
Ryuji Saiin
Tomohiro Daimon
22
4
0
23 May 2021
Effective Sparsification of Neural Networks with Global Sparsity Constraint
Xiao Zhou
Weizhong Zhang
Hang Xu
Tong Zhang
11
61
0
03 May 2021
Lottery Jackpots Exist in Pre-trained Models
Yu-xin Zhang
Mingbao Lin
Yan Wang
Fei Chao
Rongrong Ji
30
15
0
18 Apr 2021
Contextual Dropout: An Efficient Sample-Dependent Dropout Module
Xinjie Fan
Shujian Zhang
Korawat Tanwisuth
Xiaoning Qian
Mingyuan Zhou
OOD
BDL
UQCV
22
27
0
06 Mar 2021
LocalDrop: A Hybrid Regularization for Deep Neural Networks
Ziqing Lu
Chang Xu
Bo Du
Takashi Ishida
L. Zhang
Masashi Sugiyama
17
14
0
01 Mar 2021
An Information-Theoretic Justification for Model Pruning
Berivan Isik
Tsachy Weissman
Albert No
84
35
0
16 Feb 2021
Learning Task-Oriented Communication for Edge Inference: An Information Bottleneck Approach
Jiawei Shao
Yuyi Mao
Jun Zhang
32
212
0
08 Feb 2021
SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Enzo Tartaglione
Andrea Bragagnolo
Francesco Odierna
A. Fiandrotti
Marco Grangetto
38
18
0
07 Feb 2021
DiffPrune: Neural Network Pruning with Deterministic Approximate Binary Gates and $L_0$ Regularization
Yaniv Shulman
38
3
0
07 Dec 2020
Bringing AI To Edge: From Deep Learning's Perspective
Di Liu
Hao Kong
Xiangzhong Luo
Weichen Liu
Ravi Subramaniam
42
116
0
25 Nov 2020
Generalized Variational Continual Learning
Noel Loo
S. Swaroop
Richard E. Turner
BDL
CLL
28
58
0
24 Nov 2020
Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier
Vincent Gripon
Mathieu Léonardon
M. Arzel
T. Hannagan
David Bertrand
23
25
0
20 Nov 2020
Dynamic Hard Pruning of Neural Networks at the Edge of the Internet
Lorenzo Valerio
F. M. Nardini
A. Passarella
R. Perego
12
12
0
17 Nov 2020
LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
Enzo Tartaglione
Andrea Bragagnolo
A. Fiandrotti
Marco Grangetto
ODL
UQCV
11
34
0
16 Nov 2020
Dirichlet Pruning for Neural Network Compression
Kamil Adamczewski
Mijung Park
24
3
0
10 Nov 2020
Failure Prediction by Confidence Estimation of Uncertainty-Aware Dirichlet Networks
Theodoros Tsiligkaridis
UQCV
22
7
0
19 Oct 2020
Pruning Convolutional Filters using Batch Bridgeout
Najeeb Khan
Ian Stavness
19
3
0
23 Sep 2020
Training Sparse Neural Networks using Compressed Sensing
Jonathan W. Siegel
Jianhong Chen
Pengchuan Zhang
Jinchao Xu
21
5
0
21 Aug 2020
Stable Low-rank Tensor Decomposition for Compression of Convolutional Neural Network
Anh-Huy Phan
Konstantin Sobolev
Konstantin Sozykin
Dmitry Ermilov
Julia Gusak
P. Tichavský
Valeriy Glukhov
Ivan V. Oseledets
A. Cichocki
BDL
19
128
0
12 Aug 2020
Operation-Aware Soft Channel Pruning using Differentiable Masks
Minsoo Kang
Bohyung Han
AAML
20
138
0
08 Jul 2020
Revisiting Loss Modelling for Unstructured Pruning
César Laurent
Camille Ballas
Thomas George
Nicolas Ballas
Pascal Vincent
15
14
0
22 Jun 2020
Directional Pruning of Deep Neural Networks
Shih-Kang Chao
Zhanyu Wang
Yue Xing
Guang Cheng
ODL
6
33
0
16 Jun 2020
How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks
Kirill Bykov
Marina M.-C. Höhne
Klaus-Robert Müller
Shinichi Nakajima
Marius Kloft
UQCV
FAtt
11
31
0
16 Jun 2020
A Framework for Neural Network Pruning Using Gibbs Distributions
Alex Labach
S. Valaee
9
5
0
08 Jun 2020
An Overview of Neural Network Compression
James O'Neill
AI4CE
40
98
0
05 Jun 2020
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
Junjie Liu
Zhe Xu
Runbin Shi
R. Cheung
Hayden Kwok-Hay So
9
119
0
14 May 2020
Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Enzo Tartaglione
Andrea Bragagnolo
Marco Grangetto
13
11
0
30 Apr 2020
Information-Theoretic Probing with Minimum Description Length
Elena Voita
Ivan Titov
19
269
0
27 Mar 2020
Comparing Rewinding and Fine-tuning in Neural Network Pruning
Alex Renda
Jonathan Frankle
Michael Carbin
222
382
0
05 Mar 2020
Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks
Yehui Tang
Yunhe Wang
Yixing Xu
Boxin Shi
Chao Xu
Chunjing Xu
Chang Xu
6
38
0
23 Feb 2020
Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
Arsenii Ashukha
Alexander Lyzhov
Dmitry Molchanov
Dmitry Vetrov
UQCV
FedML
17
314
0
15 Feb 2020
Proving the Lottery Ticket Hypothesis: Pruning is All You Need
Eran Malach
Gilad Yehudai
Shai Shalev-Shwartz
Ohad Shamir
48
271
0
03 Feb 2020
Sparse Weight Activation Training
Md Aamir Raihan
Tor M. Aamodt
32
72
0
07 Jan 2020
Learning Sparse Sharing Architectures for Multiple Tasks
Tianxiang Sun
Yunfan Shao
Xiaonan Li
Pengfei Liu
Hang Yan
Xipeng Qiu
Xuanjing Huang
MoE
19
128
0
12 Nov 2019
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
Simon Wiedemann
H. Kirchhoffer
Stefan Matlage
Paul Haase
Arturo Marbán
...
Ahmed Osman
D. Marpe
H. Schwarz
Thomas Wiegand
Wojciech Samek
38
92
0
27 Jul 2019