Deformable Butterfly: A Highly Structured and Sparse Linear Transform

Neural Information Processing Systems (NeurIPS), 2022
25 March 2022
Rui Lin, Jie Ran, King Hung Chiu, Graziano Chesi, Ngai Wong
ArXiv (abs) · PDF · HTML · GitHub (12★)

Papers citing "Deformable Butterfly: A Highly Structured and Sparse Linear Transform"

9 / 9 papers shown

Efficient Tensor Completion Algorithms for Highly Oscillatory Operators
Navjot Singh, Edgar Solomonik, Xiaoye Li, Yang Liu
206 · 0 · 0 · 20 Oct 2025

Dimension Mixer: A Generalized Method for Structured Sparsity in Deep Neural Networks
Suman Sapkota, Binod Bhattarai
258 · 0 · 0 · 30 Nov 2023

Lite it fly: An All-Deformable-Butterfly Network
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2023
Rui Lin, Jason Chun Lok Li, Jiajun Zhou, Binxiao Huang, Jie Ran, Ngai Wong
176 · 1 · 0 · 14 Nov 2023

Does a sparse ReLU network training problem always admit an optimum?
Neural Information Processing Systems (NeurIPS), 2023
Quoc-Tung Le, E. Riccietti, Rémi Gribonval
212 · 3 · 0 · 05 Jun 2023

Sparsity in neural networks can improve their privacy
Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
245 · 2 · 0 · 20 Apr 2023

Can sparsity improve the privacy of neural networks?
Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
144 · 1 · 0 · 11 Apr 2023

Simple Hardware-Efficient Long Convolutions for Sequence Modeling
International Conference on Machine Learning (ICML), 2023
Daniel Y. Fu, Elliot L. Epstein, Eric N. D. Nguyen, A. Thomas, Michael Zhang, Tri Dao, Atri Rudra, Christopher Ré
207 · 66 · 0 · 13 Feb 2023

Monarch: Expressive Structured Matrices for Efficient and Accurate Training
International Conference on Machine Learning (ICML), 2022
Tri Dao, Beidi Chen, N. Sohoni, Arjun D Desai, Michael Poli, Jessica Grogan, Alexander Liu, Aniruddh Rao, Atri Rudra, Christopher Ré
325 · 116 · 0 · 01 Apr 2022

Efficient Identification of Butterfly Sparse Matrix Factorizations
Léon Zheng, E. Riccietti, Rémi Gribonval
717 · 8 · 0 · 04 Oct 2021