Toward Efficient Permutation for Hierarchical N:M Sparsity on GPUs

30 July 2024
Seungmin Yu
Xiaodie Yi
Hayun Lee
Dongkun Shin

Papers citing "Toward Efficient Permutation for Hierarchical N:M Sparsity on GPUs"

2 / 2 papers shown.

Accelerated Sparse Neural Training: A Provable and Efficient Method to Find N:M Transposable Masks
Itay Hubara
Brian Chmiel
Moshe Island
Ron Banner
S. Naor
Daniel Soudry
16 Feb 2021

Scaling Laws for Neural Language Models
Jared Kaplan
Sam McCandlish
T. Henighan
Tom B. Brown
B. Chess
R. Child
Scott Gray
Alec Radford
Jeff Wu
Dario Amodei
23 Jan 2020