SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning

11 July 2022
Zihao Ye, Ruihang Lai, Junru Shao, Tianqi Chen, Luis Ceze
arXiv:2207.04606

Papers citing "SparseTIR: Composable Abstractions for Sparse Compilation in Deep Learning"

8 / 8 papers shown
Hexcute: A Tile-based Programming Language with Automatic Layout and Task-Mapping Synthesis
X. Zhang, Yaoyao Ding, Yang Hu, Gennady Pekhimenko
22 Apr 2025

FlashInfer: Efficient and Customizable Attention Engine for LLM Inference Serving
Zihao Ye, Lequn Chen, Ruihang Lai, Wuwei Lin, Yineng Zhang, ..., Tianqi Chen, Baris Kasikci, Vinod Grover, Arvind Krishnamurthy, Luis Ceze
02 Jan 2025

Relax: Composable Abstractions for End-to-End Dynamic Machine Learning
Ruihang Lai, Junru Shao, Siyuan Feng, Steven Lyubomirsky, Bohan Hou, ..., Sunghyun Park, Prakalp Srivastava, Jared Roesch, T. Mowry, Tianqi Chen
01 Nov 2023

Efficient Quantized Sparse Matrix Operations on Tensor Cores
Shigang Li, Kazuki Osawa, Torsten Hoefler
14 Sep 2022

Sgap: Towards Efficient Sparse Tensor Algebra Compilation for GPU
Genghan Zhang, Yuetong Zhao, Yanting Tao, Zhongming Yu, Guohao Dai, Sitao Huang, Yuanyuan Wen, Pavlos Petoumenos, Yu Wang
07 Sep 2022

Accelerating SpMM Kernel with Cache-First Edge Sampling for Graph Neural Networks
Chien-Yu Lin, Liang Luo, Luis Ceze
GNN · 21 Apr 2021

Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
MQ · 31 Jan 2021

Heterogeneous Graph Transformer
Ziniu Hu, Yuxiao Dong, Kuansan Wang, Yizhou Sun
03 Mar 2020