ALTO: Adaptive Linearized Storage of Sparse Tensors

International Conference on Supercomputing (ICS), 2021
20 February 2021
Ahmed E. Helal
Jan Laukemann
Fabio Checconi
Jesmin Jahan Tithi
Teresa M. Ranadive
Fabrizio Petrini
Jeewhan Choi
arXiv:2102.10245 (abs) · PDF · HTML · GitHub (34★)

Papers citing "ALTO: Adaptive Linearized Storage of Sparse Tensors"

8 of 8 papers shown
ReLATE: Learning Efficient Sparse Encoding for High-Performance Tensor Decomposition
Ahmed E. Helal
Fabio Checconi
Jan Laukemann
Yongseok Soh
Jesmin Jahan Tithi
Fabrizio Petrini
Jee W. Choi
29 Aug 2025
A Sparse Tensor Generator with Efficient Feature Extraction
Tugba Torun
Eren Yenigul
Ameer Taweel
08 May 2024
Accelerating Sparse Tensor Decomposition Using Adaptive Linearized Representation
IEEE Transactions on Parallel and Distributed Systems (TPDS), 2024
Jan Laukemann
Ahmed E. Helal
S. I. G. Anderson
Fabio Checconi
Yongseok Soh
Jesmin Jahan Tithi
Teresa M. Ranadive
Brian J Gravelle
Fabrizio Petrini
Jeewhan Choi
11 Mar 2024
Dynasor: A Dynamic Memory Layout for Accelerating Sparse MTTKRP for Tensor Decomposition on Multi-core CPU
Symposium on Computer Architecture and High Performance Computing (CAHPC), 2023
Sasindu Wijeratne
Rajgopal Kannan
Viktor Prasanna
17 Sep 2023
Performance Modeling Sparse MTTKRP Using Optical Static Random Access Memory on FPGA
IEEE Conference on High Performance Extreme Computing (HPEC), 2022
Sasindu Wijeratne
Akhilesh R. Jaiswal
Ajey P. Jacob
Bingyi Zhang
Viktor Prasanna
22 Aug 2022
Towards Programmable Memory Controller for Tensor Decomposition
International Conference on Data Technologies and Applications (DATA), 2022
Sasindu Wijeratne
Ta-Yang Wang
Rajgopal Kannan
Viktor Prasanna
17 Jul 2022
Efficient, Out-of-Memory Sparse MTTKRP on Massively Parallel Architectures
International Conference on Supercomputing (ICS), 2022
A. Nguyen
Ahmed E. Helal
Fabio Checconi
Jan Laukemann
Jesmin Jahan Tithi
Yongseok Soh
Teresa M. Ranadive
Fabrizio Petrini
Jee W. Choi
29 Jan 2022
SparseP: Towards Efficient Sparse Matrix Vector Multiplication on Real Processing-In-Memory Systems
Christina Giannoula
Ivan Fernandez
Juan Gómez Luna
N. Koziris
G. Goumas
O. Mutlu
13 Jan 2022