ResearchTrend.AI
Spartan: Differentiable Sparsity via Regularized Transportation
arXiv:2205.14107 · 27 May 2022
Kai Sheng Tai, Taipeng Tian, Ser-Nam Lim

Papers citing "Spartan: Differentiable Sparsity via Regularized Transportation"

9 papers shown
PETAH: Parameter Efficient Task Adaptation for Hybrid Transformers in a resource-limited Context
Maximilian Augustin, Syed Shakib Sarwar, Mostafa Elhoushi, Sai Qian Zhang, Yuecheng Li, B. D. Salvo
23 Oct 2024
Mixed Sparsity Training: Achieving 4× FLOP Reduction for Transformer Pretraining
Pihe Hu, Shaolong Li, Longbo Huang
21 Aug 2024
Feather: An Elegant Solution to Effective DNN Sparsification
Athanasios Glentis Georgoulakis, George Retsinas, Petros Maragos
03 Oct 2023
HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization
Patrick Glandorf, Timo Kaiser, Bodo Rosenhahn
14 Aug 2023
Differentiable Transportation Pruning
Yun-qiang Li, J. C. V. Gemert, Torsten Hoefler, Bert Moons, E. Eleftheriou, Bram-Ernst Verhoef
17 Jul 2023
Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference
Tao Lei, Junwen Bai, Siddhartha Brahma, Joshua Ainslie, Kenton Lee, ..., Vincent Zhao, Yuexin Wu, Bo-wen Li, Yu Zhang, Ming-Wei Chang
11 Apr 2023
Sparse-IFT: Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie
21 Mar 2023
Powerpropagation: A sparsity inducing weight reparameterisation
Jonathan Richard Schwarz, Siddhant M. Jayakumar, Razvan Pascanu, P. Latham, Yee Whye Teh
01 Oct 2021
Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
31 Jan 2021