ResearchTrend.AI
arXiv: 2406.03048 — Cited By
Giving each task what it needs -- leveraging structured sparsity for tailored multi-task learning
5 June 2024
Authors: Richa Upadhyay, Ronald Phlypo, Rajkumar Saini, Marcus Liwicki
Topic: MoE

Papers citing "Giving each task what it needs -- leveraging structured sparsity for tailored multi-task learning"

3 / 3 papers shown
1. Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
   Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste — 31 Jan 2021 (MQ; 141 / 684 / 0)

2. What is the State of Neural Network Pruning?
   Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag — 06 Mar 2020 (185 / 1,027 / 0)

3. Learning Task Grouping and Overlap in Multi-task Learning
   Abhishek Kumar, Hal Daumé — 27 Jun 2012 (179 / 524 / 0)