ResearchTrend.AI
Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search

2 March 2021
Kartik Hegde, Po-An Tsai, Sitao Huang, Vikas Chandra, A. Parashar, Christopher W. Fletcher

Papers citing "Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search"

13 papers shown
Workload-Aware Hardware Accelerator Mining for Distributed Deep Learning Training
Muhammad Adnan, Amar Phanishayee, Janardhan Kulkarni, Prashant J. Nair, Divyat Mahajan
23 Apr 2024
Target-independent XLA optimization using Reinforcement Learning
Milan Ganai, Haichen Li, Theodore Enns, Yida Wang, Randy Huang
28 Aug 2023
SALSA: Simulated Annealing based Loop-Ordering Scheduler for DNN Accelerators
Victor J. B. Jung, Arne Symons, L. Mei, Marian Verhelst, Luca Benini
20 Apr 2023
DeFiNES: Enabling Fast Exploration of the Depth-first Scheduling Space for DNN Accelerators through Analytical Modeling
L. Mei, Koen Goetschalckx, Arne Symons, Marian Verhelst
10 Dec 2022
Demystifying Map Space Exploration for NPUs
Sheng-Chun Kao, A. Parashar, Po-An Tsai, T. Krishna
07 Oct 2022
HW-Aware Initialization of DNN Auto-Tuning to Improve Exploration Time and Robustness
D. Rieber, Moritz Reiber, Oliver Bringmann, Holger Fröning
31 May 2022
DNNFuser: Generative Pre-Trained Transformer as a Generalized Mapper for Layer Fusion in DNN Accelerators
Sheng-Chun Kao, Xiaoyu Huang, T. Krishna
26 Jan 2022
Data-Driven Offline Optimization For Architecting Hardware Accelerators
Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine
20 Oct 2021
FLAT: An Optimized Dataflow for Mitigating Attention Bottlenecks
Sheng-Chun Kao, Suvinay Subramanian, Gaurav Agrawal, Amir Yazdanbakhsh, T. Krishna
13 Jul 2021
CoSA: Scheduling by Constrained Optimization for Spatial Accelerators
Qijing Huang, Minwoo Kang, Grace Dinh, Thomas Norell, Aravind Kalaiah, J. Demmel, J. Wawrzynek, Y. Shao
05 May 2021
Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le
05 Nov 2016
Efficient Estimation of Word Representations in Vector Space
Tomáš Mikolov, Kai Chen, G. Corrado, J. Dean
16 Jan 2013
A Large Population Size Can Be Unhelpful in Evolutionary Algorithms
Tianshi Chen, Ke Tang, Guoliang Chen, Xin Yao
11 Aug 2012