High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS

IEEE Transactions on Electron Devices (IEEE TED), 2019
16 September 2019
Shihui Yin
Xiaoyu Sun
Shimeng Yu
Jae-sun Seo
arXiv: 1909.07514

Papers citing "High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS"

11 citing papers
CNN-Based Automated Parameter Extraction Framework for Modeling Memristive Devices
Akif Hamid
Orchi Hassan
11 Nov 2025
Current Opinions on Memristor-Accelerated Machine Learning Hardware
Current Opinion in Solid State & Materials Science (OSSMS), 2025
Mingrui Jiang
Yichun Xu
Zefan Li
Can Li
22 Jan 2025
Approximate ADCs for In-Memory Computing
Arkapravo Ghosh
Hemkar Reddy Sadana
Mukut Debnath
Panthadip Maji
Shubham Negi
Sumeet Gupta
M. Sharad
Kaushik Roy
11 Aug 2024
StoX-Net: Stochastic Processing of Partial Sums for Efficient In-Memory Computing DNN Accelerators
Ethan G Rogers
Sohan Salahuddin Mugdho
Kshemal Kshemendra Gupte
Cheng Wang
17 Jul 2024
Towards Efficient In-memory Computing Hardware for Quantized Neural Networks: State-of-the-art, Open Challenges and Perspectives
IEEE Transactions on Nanotechnology (IEEE Trans. Nanotechnol.), 2023
O. Krestinskaya
Li Zhang
K. Salama
08 Jul 2023
Heterogeneous Integration of In-Memory Analog Computing Architectures with Tensor Processing Units
ACM Great Lakes Symposium on VLSI (GLSVLSI), 2023
Mohammed E. Elbtity
Brendan Reidy
Md Hasibul Amin
Ramtin Zand
18 Apr 2023
A Co-design view of Compute in-Memory with Non-Volatile Elements for Neural Networks
W. Haensch
A. Raghunathan
Kaushik Roy
B. Chakrabarti
C. Phatak
Cheng Wang
Supratik Guha
03 Jun 2022
Interconnect Parasitics and Partitioning in Fully-Analog In-Memory Computing Architectures
International Symposium on Circuits and Systems (ISCAS), 2022
Md Hasibul Amin
Mohammed E. Elbtity
Ramtin Zand
29 Jan 2022
An In-Memory Analog Computing Co-Processor for Energy-Efficient CNN Inference on Mobile Devices
IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2021
Mohammed E. Elbtity
Abhishek Singh
Brendan Reidy
Xiaochen Guo
Ramtin Zand
24 May 2021
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune
IEEE Non-Volatile Memory System and Applications Symposium (NVMSA), 2021
Shanshi Huang
Hongwu Jiang
Shimeng Yu
13 Apr 2021
Exploring the Connection Between Binary and Spiking Neural Networks
Frontiers in Neuroscience (Front. Neurosci.), 2020
Sen Lu
Abhronil Sengupta
24 Feb 2020