ResearchTrend.AI
Quantized Convolutional Neural Networks for Mobile Devices
21 December 2015
Jiaxiang Wu, Cong Leng, Yuhang Wang, Qinghao Hu, Jian Cheng
MQ

Papers citing "Quantized Convolutional Neural Networks for Mobile Devices"

50 / 112 papers shown
I2CKD: Intra- and Inter-Class Knowledge Distillation for Semantic Segmentation
Ayoub Karine, Thibault Napoléon, M. Jridi
VLM
24 Feb 2025

BlabberSeg: Real-Time Embedded Open-Vocabulary Aerial Segmentation
Haechan Mark Bong, Ricardo de Azambuja, Giovanni Beltrame
VLM
16 Oct 2024

Topological Persistence Guided Knowledge Distillation for Wearable Sensor Data
Eun Som Jeon, Hongjun Choi, A. Shukla, Yuan Wang, Hyunglae Lee, M. Buman, P. Turaga
07 Jul 2024

An Empirical Investigation of Matrix Factorization Methods for Pre-trained Transformers
Ashim Gupta, Sina Mahdipour Saravani, P. Sadayappan, Vivek Srikumar
17 Jun 2024

ATOM: Attention Mixer for Efficient Dataset Distillation
Samir Khaki, A. Sajedi, Kai Wang, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis
02 May 2024
GPTVQ: The Blessing of Dimensionality for LLM Quantization
M. V. Baalen, Andrey Kuzmin, Markus Nagel, Peter Couperus, Cédric Bastoul, E. Mahurin, Tijmen Blankevoort, Paul N. Whatmough
MQ
23 Feb 2024

REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints
Francesco Corti, Balz Maag, Joachim Schauer, U. Pferschy, O. Saukh
22 Nov 2023

DataDAM: Efficient Dataset Distillation with Attention Matching
A. Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Y. Lawryshyn, Konstantinos N. Plataniotis
DD
29 Sep 2023

Systematic Architectural Design of Scale Transformed Attention Condenser DNNs via Multi-Scale Class Representational Response Similarity Analysis
Andrew Hryniowski, Alexander Wong
16 Jun 2023

ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion E-Commerce
Amrollah Seifoddini, K. Vernooij, Timon Künzle, A. Canopoli, Malte F. Alf, Anna Volokitin, Reza Shirvany
3DH
15 Apr 2023
Performance-aware Approximation of Global Channel Pruning for Multitask CNNs
Hancheng Ye, Bo-Wen Zhang, Tao Chen, Jiayuan Fan, Bin Wang
21 Mar 2023

BiBench: Benchmarking and Analyzing Network Binarization
Haotong Qin, Mingyuan Zhang, Yifu Ding, Aoyu Li, Zhongang Cai, Ziwei Liu, F. I. F. Richard Yu, Xianglong Liu
MQ, AAML
26 Jan 2023

Hyperspherical Quantization: Toward Smaller and More Accurate Models
Dan Liu, X. Chen, Chen-li Ma, Xue Liu
MQ
24 Dec 2022

Neural Network Compression by Joint Sparsity Promotion and Redundancy Reduction
T. M. Khan, Syed S. Naqvi, A. Robles-Kelly, Erik H. W. Meijering
14 Oct 2022

Deep Learning on Home Drone: Searching for the Optimal Architecture
Alaa Maalouf, Yotam Gurfinkel, Barak Diker, O. Gal, Daniela Rus, Dan Feldman
21 Sep 2022
Mixed-Precision Neural Networks: A Survey
M. Rakka, M. Fouda, Pramod P. Khargonekar, Fadi J. Kurdahi
MQ
11 Aug 2022

Look-ups are not (yet) all you need for deep learning inference
Calvin McCarter, Nicholas Dronen
12 Jul 2022

QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization
Fadi Boutros, Naser Damer, Arjan Kuijper
CVBM, MQ
21 Jun 2022

OPQ: Compressing Deep Neural Networks with One-shot Pruning-Quantization
Peng Hu, Xi Peng, Hongyuan Zhu, M. Aly, Jie Lin
MQ
23 May 2022

Knowledge Distillation Meets Open-Set Semi-Supervised Learning
Jing Yang, Xiatian Zhu, Adrian Bulat, Brais Martínez, Georgios Tzimiropoulos
13 May 2022

Cross-Image Relational Knowledge Distillation for Semantic Segmentation
Chuanguang Yang, Helong Zhou, Zhulin An, Xue Jiang, Yong Xu, Qian Zhang
14 Apr 2022
OMAD: On-device Mental Anomaly Detection for Substance and Non-Substance Users
Emon Dey, Nirmalya Roy
13 Apr 2022

LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish, Kamal Gupta, Saurabh Singh, Abhinav Shrivastava
06 Apr 2022

Multi-task Learning Approach for Modulation and Wireless Signal Classification for 5G and Beyond: Edge Deployment via Model Compression
Anu Jagannath, Jithin Jagannath
26 Feb 2022

Memory Planning for Deep Neural Networks
Maksim Levental
23 Feb 2022

HRel: Filter Pruning based on High Relevance between Activation Maps and Class Labels
C. Sarvani, Mrinmoy Ghorai, S. Dubey, S. H. Shabbeer Basha
VLM
22 Feb 2022

Controlling the Quality of Distillation in Response-Based Network Compression
Vibhas Kumar Vats, David J. Crandall
19 Dec 2021
Mixed Precision of Quantization of Transformer Language Models for Speech Recognition
Junhao Xu, Shoukang Hu, Jianwei Yu, Xunying Liu, Helen M. Meng
MQ
29 Nov 2021

Improved Knowledge Distillation via Adversarial Collaboration
Zhiqiang Liu, Chengkai Huang, Yanxia Liu
29 Nov 2021

Toward Compact Parameter Representations for Architecture-Agnostic Neural Network Compression
Yuezhou Sun, Wenlong Zhao, Lijun Zhang, Xiao Liu, Hui Guan, Matei A. Zaharia
19 Nov 2021

Oracle Teacher: Leveraging Target Information for Better Knowledge Distillation of CTC Models
J. Yoon, H. Kim, Hyeon Seung Lee, Sunghwan Ahn, N. Kim
05 Nov 2021

Smart at what cost? Characterising Mobile Deep Neural Networks in the wild
Mario Almeida, Stefanos Laskaridis, Abhinav Mehrotra, L. Dudziak, Ilias Leontiadis, Nicholas D. Lane
HAI
28 Sep 2021

Architecture Aware Latency Constrained Sparse Neural Networks
Tianli Zhao, Qinghao Hu, Xiangyu He, Weixiang Xu, Jiaxing Wang, Cong Leng, Jian Cheng
01 Sep 2021
CoCo DistillNet: a Cross-layer Correlation Distillation Network for Pathological Gastric Cancer Segmentation
Wenxuan Zou, Muyi Sun
27 Aug 2021

Online Knowledge Distillation for Efficient Pose Estimation
Zheng Li, Jingwen Ye, Mingli Song, Ying Huang, Zhigeng Pan
04 Aug 2021

Bias Loss for Mobile Neural Networks
L. Abrahamyan, Valentin Ziatchin, Yiming Chen, Nikos Deligiannis
23 Jul 2021

Follow Your Path: a Progressive Method for Knowledge Distillation
Wenxian Shi, Yuxuan Song, Hao Zhou, Bohan Li, Lei Li
20 Jul 2021

APNN-TC: Accelerating Arbitrary Precision Neural Networks on Ampere GPU Tensor Cores
Boyuan Feng, Yuke Wang, Tong Geng, Ang Li, Yufei Ding
MQ
23 Jun 2021
FORMS: Fine-grained Polarized ReRAM-based In-situ Computation for Mixed-signal DNN Accelerator
Geng Yuan, Payman Behnam, Zhengang Li, Ali Shafiee, Sheng Lin, ..., Hang Liu, Xuehai Qian, M. N. Bojnordi, Yanzhi Wang, Caiwen Ding
16 Jun 2021

Compact CNN Structure Learning by Knowledge Distillation
Waqar Ahmed, Andrea Zunino, Pietro Morerio, Vittorio Murino
19 Apr 2021

"BNN - BN = ?": Training Binary Neural Networks without Batch Normalization
Tianlong Chen, Zhenyu (Allen) Zhang, Xu Ouyang, Zechun Liu, Zhiqiang Shen, Zhangyang Wang
MQ
16 Apr 2021

Distilling Object Detectors via Decoupled Features
Jianyuan Guo, Kai Han, Yunhe Wang, Han Wu, Xinghao Chen, Chunjing Xu, Chang Xu
26 Mar 2021

Compacting Deep Neural Networks for Internet of Things: Methods and Applications
Ke Zhang, Hanbo Ying, Hongning Dai, Lin Li, Yuangyuang Peng, Keyi Guo, Hongfang Yu
20 Mar 2021

Learned Gradient Compression for Distributed Deep Learning
L. Abrahamyan, Yiming Chen, Giannis Bekoulis, Nikos Deligiannis
16 Mar 2021
hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices
F. Fahim, B. Hawks, C. Herwig, J. Hirschauer, S. Jindariani, ..., J. Ngadiuba, Miaoyuan Liu, Duc Hoang, E. Kreinar, Zhenbin Wu
09 Mar 2021

VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
Steve Dai, Rangharajan Venkatesan, Haoxing Ren, B. Zimmer, W. Dally, Brucek Khailany
MQ
08 Feb 2021

SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks
Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, A. Fiandrotti, Marco Grangetto
07 Feb 2021

Hybrid and Non-Uniform quantization methods using retro synthesis data for efficient inference
Gvsl Tej Pratap, R. Kumar
MQ
26 Dec 2020

Parallel Blockwise Knowledge Distillation for Deep Neural Network Compression
Cody Blakeney, Xiaomin Li, Yan Yan, Ziliang Zong
05 Dec 2020

Bringing AI To Edge: From Deep Learning's Perspective
Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
25 Nov 2020