Learning low-precision neural networks without Straight-Through Estimator (STE)

Z. G. Liu, Matthew Mattina
4 March 2019
Tags: MQ

Papers citing "Learning low-precision neural networks without Straight-Through Estimator (STE)" (8 of 8 papers shown)
  • "Patch-wise Mixed-Precision Quantization of Vision Transformer" (Junrui Xiao, Zhikai Li, Lianwei Yang, Qingyi Gu; MQ; 11 May 2023)
  • "QFT: Post-training quantization via fast joint finetuning of all degrees of freedom" (Alexander Finkelstein, Ella Fuchs, Idan Tal, Mark Grobman, Niv Vosco, Eldad Meller; MQ; 05 Dec 2022)
  • "Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution" (Yushu Wu, Yifan Gong, Pu Zhao, Yanyu Li, Zheng Zhan, Wei Niu, Hao Tang, Minghai Qin, Bin Ren, Yanzhi Wang; SupR, MQ; 25 Jul 2022)
  • "Differentiable Model Compression via Pseudo Quantization Noise" (Alexandre Défossez, Yossi Adi, Gabriel Synnaeve; DiffM, MQ; 20 Apr 2021)
  • "Learnable Companding Quantization for Accurate Low-bit Neural Networks" (Kohei Yamamoto; MQ; 12 Mar 2021)
  • "Efficient Residue Number System Based Winograd Convolution" (Zhi-Gang Liu, Matthew Mattina; 23 Jul 2020)
  • "Memory-Driven Mixed Low Precision Quantization For Enabling Deep Network Inference On Microcontrollers" (Manuele Rusci, Alessandro Capotondi, Luca Benini; MQ; 30 May 2019)
  • "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights" (Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, Yurong Chen; MQ; 10 Feb 2017)