ResearchTrend.AI

Fixed-point optimization of deep neural networks with adaptive step size retraining (arXiv:1702.08171)

27 February 2017
Sungho Shin, Yoonho Boo, Wonyong Sung
MQ
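As context for the MQ (model quantization) tag, here is a minimal sketch of the general idea behind fixed-point quantization with an adapted step size: weights are rounded to a uniform grid, and the step size is chosen to minimize quantization error over a set of candidates. This is illustrative only, not the paper's exact retraining algorithm; the function names and candidate-search strategy are assumptions.

```python
import numpy as np

def quantize(w, step, bits):
    """Symmetric uniform fixed-point quantization of an array of weights."""
    levels = 2 ** (bits - 1) - 1          # e.g. 4 bits -> integer codes in [-7, 7]
    codes = np.clip(np.round(w / step), -levels, levels)
    return codes * step                    # map integer codes back to real values

def adapt_step(w, bits, candidates):
    """Pick the step size from `candidates` that minimizes L2 quantization error."""
    return min(candidates,
               key=lambda s: np.sum((w - quantize(w, s, bits)) ** 2))

# Example usage: select the best step for a small weight vector.
weights = np.array([0.5, -1.2, 0.3, 2.0])
step = adapt_step(weights, bits=4, candidates=[0.1, 0.2, 0.4, 0.8])
quantized = quantize(weights, step, bits=4)
```

Per-layer step-size adaptation of this kind is a common element of fixed-point DNN optimization; approaches differ mainly in how the step size is searched and whether the network is retrained after quantization.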

Papers citing "Fixed-point optimization of deep neural networks with adaptive step size retraining"

4 / 4 papers shown
Quantune: Post-training Quantization of Convolutional Neural Networks using Extreme Gradient Boosting for Fast Deployment
Jemin Lee, Misun Yu, Yongin Kwon, Taeho Kim
MQ
10 Feb 2022

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
AAML, MQ
16 Apr 2021

BitNet: Bit-Regularized Deep Neural Networks
Aswin Raghavan, Mohamed R. Amer, S. Chai, Graham Taylor
MQ
16 Aug 2017

Streaming Architecture for Large-Scale Quantized Neural Networks on an FPGA-Based Dataflow Platform
Chaim Baskin, Natan Liss, Evgenii Zheltonozhskii, A. Bronstein, A. Mendelson
GNN, MQ
31 Jul 2017