Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks (arXiv:1903.08066)

19 March 2019
Sambhav R. Jain
Albert Gural
Michael Wu
Chris Dick
    MQ

Papers citing "Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks"

30 / 30 papers shown
Dedicated Inference Engine and Binary-Weight Neural Networks for Lightweight Instance Segmentation
Tse-Wei Chen
Wei Tao
Dongyue Zhao
Kazuhiro Mima
Tadayuki Ito
Kinya Osa
Masami Kato
MQ
31
0
0
03 Jan 2025
On the Impact of White-box Deployment Strategies for Edge AI on Latency and Model Performance
Jaskirat Singh
Bram Adams
Ahmed E. Hassan
VLM
38
0
0
01 Nov 2024
P$^2$-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer
Huihong Shi
Xin Cheng
Wendong Mao
Zhongfeng Wang
MQ
40
3
0
30 May 2024
xTern: Energy-Efficient Ternary Neural Network Inference on RISC-V-Based Edge Systems
Georg Rutishauser
Joan Mihali
Moritz Scherer
Luca Benini
24
1
0
29 May 2024
Selective Focus: Investigating Semantics Sensitivity in Post-training Quantization for Lane Detection
Yunqian Fan
Xiuying Wei
Ruihao Gong
Yuqing Ma
Xiangguo Zhang
Qi Zhang
Xianglong Liu
MQ
27
2
0
10 May 2024
AdaQAT: Adaptive Bit-Width Quantization-Aware Training
Cédric Gernigon
Silviu-Ioan Filip
Olivier Sentieys
Clément Coggiola
Mickael Bruno
23
2
0
22 Apr 2024
Efficient Neural PDE-Solvers using Quantization Aware Training
W.V.S.O. van den Dool
Tijmen Blankevoort
Max Welling
Yuki M. Asano
MQ
27
3
0
14 Aug 2023
MRQ: Support Multiple Quantization Schemes through Model Re-Quantization
Manasa Manohara
Sankalp Dayal
Tarqi Afzal
Rahul Bakshi
Kahkuen Fu
MQ
20
0
0
01 Aug 2023
Free Bits: Latency Optimization of Mixed-Precision Quantized Neural Networks on the Edge
Georg Rutishauser
Francesco Conti
Luca Benini
MQ
23
5
0
06 Jul 2023
$\rm A^2Q$: Aggregation-Aware Quantization for Graph Neural Networks
Zeyu Zhu
Fanrong Li
Zitao Mo
Qinghao Hu
Gang Li
Zejian Liu
Xiaoyao Liang
Jian Cheng
GNN
MQ
24
4
0
01 Feb 2023
QFT: Post-training quantization via fast joint finetuning of all degrees of freedom
Alexander Finkelstein
Ella Fuchs
Idan Tal
Mark Grobman
Niv Vosco
Eldad Meller
MQ
21
6
0
05 Dec 2022
Efficient and Accurate Quantized Image Super-Resolution on Mobile NPUs, Mobile AI & AIM 2022 challenge: Report
Andrey D. Ignatov
Radu Timofte
Maurizio Denna
Abdelbadie Younes
Ganzorig Gankhuyag
...
Jing Liu
Garas Gendy
Nabil Sabor
J. Hou
Guanghui He
SupR
MQ
20
31
0
07 Nov 2022
Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report
Andrey D. Ignatov
Grigory Malivenko
Radu Timofte
Lukasz Treszczotko
Xin-ke Chang
...
Dongwon Park
Seongmin Hong
Joonhee Lee
Seunggyu Lee
Sengsub Chun
25
17
0
07 Nov 2022
Learned Smartphone ISP on Mobile GPUs with Deep Learning, Mobile AI & AIM 2022 Challenge: Report
Andrey D. Ignatov
Radu Timofte
Shuai Liu
Chaoyu Feng
Furui Bai
...
Xin Lou
Wei Zhou
Cong Pang
Haina Qin
Mingxuan Cai
21
23
0
07 Nov 2022
SQuAT: Sharpness- and Quantization-Aware Training for BERT
Zheng Wang
Juncheng Billy Li
Shuhui Qu
Florian Metze
Emma Strubell
MQ
18
7
0
13 Oct 2022
RAPQ: Rescuing Accuracy for Power-of-Two Low-bit Post-training Quantization
Hongyi Yao
Pu Li
Jian Cao
Xiangcheng Liu
Chenying Xie
Bin Wang
MQ
19
12
0
26 Apr 2022
LilNetX: Lightweight Networks with EXtreme Model Compression and Structured Sparsification
Sharath Girish
Kamal Gupta
Saurabh Singh
Abhinav Shrivastava
28
11
0
06 Apr 2022
Resource-efficient Deep Neural Networks for Automotive Radar Interference Mitigation
J. Rock
Wolfgang Roth
Máté Tóth
Paul Meissner
Franz Pernkopf
17
43
0
25 Jan 2022
Elastic-Link for Binarized Neural Network
Jie Hu
Ziheng Wu
Vince Tan
Zhilin Lu
Mengze Zeng
Enhua Wu
MQ
28
6
0
19 Dec 2021
AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator
Chuteng Zhou
F. García-Redondo
Julian Büchel
I. Boybat
Xavier Timoneda Comas
S. Nandakumar
Shidhartha Das
A. Sebastian
M. Le Gallo
P. Whatmough
25
16
0
10 Nov 2021
Fast and Accurate Quantized Camera Scene Detection on Smartphones, Mobile AI 2021 Challenge: Report
Andrey D. Ignatov
Grigory Malivenko
Radu Timofte
Sheng Chen
Xin Xia
...
K. Lyda
L. Khojoyan
Abhishek Thanki
Sayak Paul
Shahid Siddiqui
MQ
15
20
0
17 May 2021
Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics
S. Yun
Alexander Wong
MQ
19
25
0
24 Apr 2021
Differentiable Model Compression via Pseudo Quantization Noise
Alexandre Défossez
Yossi Adi
Gabriel Synnaeve
DiffM
MQ
12
46
0
20 Apr 2021
End-to-end Keyword Spotting using Neural Architecture Search and Quantization
David Peter
Wolfgang Roth
Franz Pernkopf
MQ
22
14
0
14 Apr 2021
Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer
Phuoc Pham
J. Abraham
Jaeyong Chung
MQ
33
11
0
01 Apr 2021
Bringing AI To Edge: From Deep Learning's Perspective
Di Liu
Hao Kong
Xiangzhong Luo
Weichen Liu
Ravi Subramaniam
49
116
0
25 Nov 2020
Some Remarks on Replicated Simulated Annealing
Vincent Gripon
Matthias Löwe
Franck Vermet
14
2
0
30 Sep 2020
Towards Efficient Training for Neural Network Quantization
Qing Jin
Linjie Yang
Zhenyu A. Liao
MQ
11
42
0
21 Dec 2019
QKD: Quantization-aware Knowledge Distillation
Jangho Kim
Yash Bhalgat
Jinwon Lee
Chirag I. Patel
Nojun Kwak
MQ
16
63
0
28 Nov 2019
Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers
Manuele Rusci
Alessandro Capotondi
Luca Benini
MQ
14
74
0
30 May 2019