ResearchTrend.AI
Ristretto: Hardware-Oriented Approximation of Convolutional Neural Networks
Philipp Gysel
20 May 2016 (arXiv:1605.06402)

Papers citing "Ristretto: Hardware-Oriented Approximation of Convolutional Neural Networks"

14 papers shown

  • AutoQNN: An End-to-End Framework for Automatically Quantizing Neural Networks [MQ]
    Cheng Gong, Ye Lu, Surong Dai, Deng Qian, Chenkun Du, Tao Li
    07 Apr 2023
  • AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch
    Dimitrios Danopoulos, Georgios Zervakis, K. Siozios, Dimitrios Soudris, J. Henkel
    08 Mar 2022
  • Speedup deep learning models on GPU by taking advantage of efficient unstructured pruning and bit-width reduction
    Marcin Pietroń, Dominik Zurek
    28 Dec 2021
  • TMA: Tera-MACs/W Neural Hardware Inference Accelerator with a Multiplier-less Massive Parallel Processor [BDL]
    Hyunbin Park, Dohyun Kim, Shiho Kim
    08 Sep 2019
  • GDRQ: Group-based Distribution Reshaping for Quantization [MQ]
    Haibao Yu, Tuopu Wen, Guangliang Cheng, Jiankai Sun, Qi Han, Jianping Shi
    05 Aug 2019
  • Optimally Scheduling CNN Convolutions for Efficient Memory Access
    Arthur Stoutchinin, Francesco Conti, Luca Benini
    04 Feb 2019
  • Deep Positron: A Deep Neural Network Using the Posit Number System [MQ]
    Zachariah Carmichael, Seyed Hamed Fatemi Langroudi, Char Khazanov, Jeffrey Lillie, J. Gustafson, Dhireesha Kudithipudi
    05 Dec 2018
  • QUENN: QUantization Engine for low-power Neural Networks [MQ]
    Miguel de Prado, Maurizio Denna, Luca Benini, Nuria Pazos
    14 Nov 2018
  • Quantization for Rapid Deployment of Deep Neural Networks [MQ]
    J. Lee, Sangwon Ha, Saerom Choi, Won-Jo Lee, Seungwon Lee
    12 Oct 2018
  • Stacked Filters Stationary Flow For Hardware-Oriented Acceleration Of Deep Convolutional Neural Networks
    Yuechao Gao, Nianhong Liu, Shenmin Zhang
    23 Jan 2018
  • ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation
    Moritz B. Milde, Daniel Neil, Alessandro Aimar, T. Delbruck, Giacomo Indiveri [MQ]
    13 Nov 2017
  • Minimum Energy Quantized Neural Networks [MQ]
    Bert Moons, Koen Goetschalckx, Nick Van Berckelaer, Marian Verhelst
    01 Nov 2017
  • Bayesian Compression for Deep Learning [UQCV, BDL]
    Christos Louizos, Karen Ullrich, Max Welling
    24 May 2017
  • Exploring the Design Space of Deep Convolutional Neural Networks at Large Scale [3DV]
    F. Iandola
    20 Dec 2016