
Soft Weight-Sharing for Neural Network Compression
arXiv 1702.04008 · 13 February 2017
Karen Ullrich, Edward Meeds, Max Welling

Papers citing "Soft Weight-Sharing for Neural Network Compression"

46 papers shown

Pruning-Based TinyML Optimization of Machine Learning Models for Anomaly Detection in Electric Vehicle Charging Infrastructure
Fatemeh Dehrouyeh, I. Shaer, S. Nikan, F. Badrkhani Ajaei, Abdallah Shami
55 · 0 · 0 · 19 Mar 2025

GenAINet: Enabling Wireless Collective Intelligence via Knowledge Transfer and Reasoning
Han Zou, Qiyang Zhao, Lina Bariah, Yu Tian, M. Bennis, S. Lasaulce
91 · 12 · 0 · 26 Feb 2024

eDKM: An Efficient and Accurate Train-time Weight Clustering for Large Language Models
Minsik Cho, Keivan Alizadeh Vahid, Qichen Fu, Saurabh N. Adya, C. C. D. Mundo, Mohammad Rastegari, Devang Naik, Peter Zatloukal
MQ
21 · 6 · 0 · 02 Sep 2023

Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference
Deepika Bablani, J. McKinstry, S. K. Esser, R. Appuswamy, D. Modha
MQ
8 · 4 · 0 · 30 Jan 2023

Scaling Deep Networks with the Mesh Adaptive Direct Search algorithm
Dounia Lakhmiri, Mahdi Zolnouri, V. Nia, C. Tribes, Sébastien Le Digabel
20 · 0 · 0 · 17 Jan 2023

Novel transfer learning schemes based on Siamese networks and synthetic data
Dominik Stallmann, Philip Kenneweg, Barbara Hammer
18 · 6 · 0 · 21 Nov 2022

Fast and Low-Memory Deep Neural Networks Using Binary Matrix Factorization
Alireza Bordbar, M. Kahaei
MQ
15 · 0 · 0 · 24 Oct 2022

Nonlocal optimization of binary neural networks
Amir Khoshaman, Giuseppe Castiglione, C. Srinivasa
11 · 0 · 0 · 05 Apr 2022

Quantization in Layer's Input is Matter
Daning Cheng, Wenguang Chen
MQ
11 · 0 · 0 · 10 Feb 2022

Croesus: Multi-Stage Processing and Transactions for Video-Analytics in Edge-Cloud Systems
Samaa Gazzaz, Vishal Chakraborty, Faisal Nawab
20 · 10 · 0 · 31 Dec 2021

Multi-Glimpse Network: A Robust and Efficient Classification Architecture based on Recurrent Downsampled Attention
S. Tan, Runpei Dong, Kaisheng Ma
22 · 2 · 0 · 03 Nov 2021

Neural network relief: a pruning algorithm based on neural activity
Aleksandr Dekhovich, David Tax, M. Sluiter, Miguel A. Bessa
37 · 10 · 0 · 22 Sep 2021

A Survey on GAN Acceleration Using Memory Compression Technique
Dina Tantawy, Mohamed Zahran, A. Wassal
28 · 8 · 0 · 14 Aug 2021

Differentiable Model Compression via Pseudo Quantization Noise
Alexandre Défossez, Yossi Adi, Gabriel Synnaeve
DiffM MQ
10 · 46 · 0 · 20 Apr 2021

COIN: COmpression with Implicit Neural representations
Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet
8 · 223 · 0 · 03 Mar 2021

SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, A. Fiandrotti, Marco Grangetto
38 · 18 · 0 · 07 Feb 2021

Rethinking Weight Decay For Efficient Neural Network Pruning
Hugo Tessier, Vincent Gripon, Mathieu Léonardon, M. Arzel, T. Hannagan, David Bertrand
13 · 25 · 0 · 20 Nov 2020

Dynamic Hard Pruning of Neural Networks at the Edge of the Internet
Lorenzo Valerio, F. M. Nardini, A. Passarella, R. Perego
12 · 12 · 0 · 17 Nov 2020

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks
Enzo Tartaglione, Andrea Bragagnolo, A. Fiandrotti, Marco Grangetto
ODL UQCV
11 · 34 · 0 · 16 Nov 2020

Dirichlet Pruning for Neural Network Compression
Kamil Adamczewski, Mijung Park
22 · 3 · 0 · 10 Nov 2020

Pruning Convolutional Filters using Batch Bridgeout
Najeeb Khan, Ian Stavness
8 · 3 · 0 · 23 Sep 2020

An Overview of Neural Network Compression
James O'Neill
AI4CE
40 · 98 · 0 · 05 Jun 2020

Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey
Jiayi Liu, S. Tripathi, Unmesh Kurup, Mohak Shah
3DPC MedIm
9 · 52 · 0 · 08 May 2020

Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima
Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
11 · 11 · 0 · 30 Apr 2020

Uncertainty Quantification for Sparse Deep Learning
Yuexi Wang, Veronika Rockova
BDL UQCV
13 · 31 · 0 · 26 Feb 2020

Communication-Efficient Edge AI: Algorithms and Systems
Yuanming Shi, Kai Yang, Tao Jiang, Jun Zhang, Khaled B. Letaief
GNN
17 · 326 · 0 · 22 Feb 2020

Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning
Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, Dmitry Vetrov
UQCV FedML
17 · 314 · 0 · 15 Feb 2020

HiLLoC: Lossless Image Compression with Hierarchical Latent Variable Models
James Townsend, Thomas Bird, Julius Kunze, David Barber
BDL VLM
11 · 56 · 0 · 20 Dec 2019

Iteratively Training Look-Up Tables for Network Quantization
Fabien Cardinaux, Stefan Uhlich, K. Yoshiyama, Javier Alonso García, Lukas Mauch, Stephen Tiedemann, Thomas Kemp, Akira Nakamura
MQ
19 · 16 · 0 · 12 Nov 2019

DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
Simon Wiedemann, H. Kirchhoffer, Stefan Matlage, Paul Haase, Arturo Marbán, ..., Ahmed Osman, D. Marpe, H. Schwarz, Thomas Wiegand, Wojciech Samek
31 · 92 · 0 · 27 Jul 2019

Learning Multimodal Fixed-Point Weights using Gradient Descent
Lukas Enderich, Fabian Timm, Lars Rosenbaum, Wolfram Burgard
MQ
17 · 9 · 0 · 16 Jul 2019

Sparse Networks from Scratch: Faster Training without Losing Performance
Tim Dettmers, Luke Zettlemoyer
20 · 333 · 0 · 10 Jul 2019

Constructing Energy-efficient Mixed-precision Neural Networks through Principal Component Analysis for Edge Intelligence
I. Chakraborty, Deboleena Roy, Isha Garg, Aayush Ankit, Kaushik Roy
17 · 37 · 0 · 04 Jun 2019

Progressive Weight Pruning of Deep Neural Networks using ADMM
Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, ..., M. Fardad, Sijia Liu, Xiang Chen, X. Lin, Yanzhi Wang
AI4CE
21 · 38 · 0 · 17 Oct 2018

Rate Distortion For Model Compression: From Theory To Practice
Weihao Gao, Yu-Han Liu, Chong-Jun Wang, Sewoong Oh
17 · 31 · 0 · 09 Oct 2018

Relaxed Quantization for Discretized Neural Networks
Christos Louizos, M. Reisser, Tijmen Blankevoort, E. Gavves, Max Welling
MQ
14 · 131 · 0 · 03 Oct 2018

Probabilistic Binary Neural Networks
Jorn W. T. Peters, Max Welling
BDL UQCV MQ
9 · 50 · 0 · 10 Sep 2018

MPDCompress - Matrix Permutation Decomposition Algorithm for Deep Neural Network Compression
Lazar Supic, R. Naous, Ranko Sredojevic, Aleksandra Faust, Vladimir M. Stojanović
17 · 4 · 0 · 30 May 2018

Scalable Methods for 8-bit Training of Neural Networks
Ron Banner, Itay Hubara, Elad Hoffer, Daniel Soudry
MQ
20 · 329 · 0 · 25 May 2018

Compressing Neural Networks using the Variational Information Bottleneck
Bin Dai, Chen Zhu, David Wipf
MLT
20 · 178 · 0 · 28 Feb 2018

Bayesian Incremental Learning for Deep Neural Networks
Max Kochurov, T. Garipov, D. Podoprikhin, Dmitry Molchanov, Arsenii Ashukha, Dmitry Vetrov
OOD CLL BDL
8 · 22 · 0 · 20 Feb 2018

The Description Length of Deep Learning Models
Léonard Blier, Yann Ollivier
24 · 95 · 0 · 20 Feb 2018

A Scalable Near-Memory Architecture for Training Deep Neural Networks on Large In-Memory Datasets
Fabian Schuiki, Michael Schaffner, Frank K. Gürkaynak, Luca Benini
21 · 70 · 0 · 19 Feb 2018

BitNet: Bit-Regularized Deep Neural Networks
Aswin Raghavan, Mohamed R. Amer, S. Chai, Graham Taylor
MQ
22 · 10 · 0 · 16 Aug 2017

Bayesian Compression for Deep Learning
Christos Louizos, Karen Ullrich, Max Welling
UQCV BDL
15 · 479 · 0 · 24 May 2017

Cnvlutin2: Ineffectual-Activation-and-Weight-Free Deep Neural Network Computing
Patrick Judd, Alberto Delmas Lascorz, Sayeh Sharify, Andreas Moshovos
16 · 27 · 0 · 29 Apr 2017