Attention, Distillation, and Tabularization: Towards Practical Neural Network-Based Prefetching

23 December 2023
Pengmiao Zhang, Neelesh Gupta, Rajgopal Kannan, Viktor K. Prasanna

Papers citing "Attention, Distillation, and Tabularization: Towards Practical Neural Network-Based Prefetching"

2 papers shown
Pruning and Quantization for Deep Neural Network Acceleration: A Survey
Tailin Liang, C. Glossner, Lei Wang, Shaobo Shi, Xiaotong Zhang
MQ · 24 Jan 2021
ShiftAddNet: A Hardware-Inspired Deep Network
Haoran You, Xiaohan Chen, Yongan Zhang, Chaojian Li, Sicheng Li, Zihao Liu, Zhangyang Wang, Yingyan Lin
OOD, MQ · 24 Oct 2020