Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks

9 May 2018
Charles Eckert, Xiaowei Wang, Jingcheng Wang, Arun K. Subramaniyan, R. Iyer, D. Sylvester, D. Blaauw, R. Das

Papers citing "Neural Cache: Bit-Serial In-Cache Acceleration of Deep Neural Networks"

16 papers shown

DAISM: Digital Approximate In-SRAM Multiplier-based Accelerator for DNN Training and Inference
Lorenzo Sonnino, Shaswot Shresthamali, Yuan He, Masaaki Kondo
12 May 2023

TransPimLib: A Library for Efficient Transcendental Functions on Processing-in-Memory Systems
Maurus Item, Juan Gómez Luna, Yu-Yin Guo, Geraldo F. Oliveira, Mohammad Sadrosadati, O. Mutlu
03 Apr 2023

RedBit: An End-to-End Flexible Framework for Evaluating the Accuracy of Quantized CNNs
A. M. Ribeiro-dos-Santos, João Dinis Ferreira, O. Mutlu, G. Falcão
15 Jan 2023

An Experimental Evaluation of Machine Learning Training on a Real Processing-in-Memory System
Juan Gómez Luna, Yu-Yin Guo, Sylvan Brocard, Julien Legriel, Remy Cimadomo, Geraldo F. Oliveira, Gagandeep Singh, O. Mutlu
16 Jul 2022

Heterogeneous Data-Centric Architectures for Modern Data-Intensive Applications: Case Studies in Machine Learning and Databases
Geraldo F. Oliveira, Amirali Boroumand, Saugata Ghose, Juan Gómez Luna, O. Mutlu
29 May 2022

SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures
Yunjae Lee, Jin-Won Chung, Minsoo Rhu
10 May 2022

Accelerating Attention through Gradient-Based Learned Runtime Pruning
Zheng Li, Soroush Ghodrati, Amir Yazdanbakhsh, H. Esmaeilzadeh, Mingu Kang
07 Apr 2022

Benchmarking Memory-Centric Computing Systems: Analysis of Real Processing-in-Memory Hardware
Juan Gómez Luna, I. E. Hajj, Ivan Fernandez, Christina Giannoula, Geraldo F. Oliveira, O. Mutlu
04 Oct 2021

Benchmarking a New Paradigm: An Experimental Analysis of a Real Processing-in-Memory Architecture
Juan Gómez Luna, I. E. Hajj, Ivan Fernandez, Christina Giannoula, Geraldo F. Oliveira, O. Mutlu
09 May 2021

DAMOV: A New Methodology and Benchmark Suite for Evaluating Data Movement Bottlenecks
Geraldo F. Oliveira, Juan Gómez Luna, Lois Orosa, Saugata Ghose, Nandita Vijaykumar, Ivan Fernandez, Mohammad Sadrosadati, O. Mutlu
08 May 2021

FPRaker: A Processing Element For Accelerating Neural Network Training
Omar Mohamed Awad, Mostafa Mahmoud, Isak Edo Vivancos, Ali Hadi Zadeh, Ciaran Bannon, Anand Jayarajan, Gennady Pekhimenko, Andreas Moshovos
15 Oct 2020

Timing Cache Accesses to Eliminate Side Channels in Shared Software
Divya Ojha, S. Dwarkadas
30 Sep 2020

IMAC: In-memory multi-bit Multiplication and ACcumulation in 6T SRAM Array
M. Ali, Akhilesh R. Jaiswal, Sangamesh Kodge, Amogh Agrawal, I. Chakraborty, Kaushik Roy
27 Mar 2020

TiM-DNN: Ternary in-Memory accelerator for Deep Neural Networks
Shubham Jain, S. Gupta, A. Raghunathan
15 Sep 2019

A Workload and Programming Ease Driven Perspective of Processing-in-Memory
Saugata Ghose, Amirali Boroumand, Jeremie S. Kim, Juan Gómez Luna, O. Mutlu
26 Jul 2019

RAPIDNN: In-Memory Deep Neural Network Acceleration Framework
Mohsen Imani, Mohammad Samragh, Yeseong Kim, Saransh Gupta, F. Koushanfar, Tajana Simunic
15 Jun 2018