OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition

20 September 2024
Stephen Zhang
V. Papyan
Abstract

The recent paradigm shift to large-scale foundation models has brought about a new era for deep learning that, while it has found great success in practice, has also been plagued by prohibitively expensive costs in terms of high memory consumption and compute. To mitigate these issues, there has been a concerted effort in post-hoc neural network pruning techniques that do not require costly retraining. Despite the considerable progress being made, existing methods often exhibit a steady drop in model performance as the compression increases. In this paper, we present a novel approach to compressing large transformers, coined OATS, that utilizes the second moment information in the input embeddings to decompose the model weights into a sum of sparse and low-rank matrices. Without any retraining, OATS achieves state-of-the-art performance when compressing models by up to 60% on large language models such as Llama-3 and Phi-3 and vision transformers such as ViT and DINOv2, while delivering up to 1.37× the CPU acceleration versus a model that was comparably pruned.
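To illustrate the sparse-plus-low-rank idea described in the abstract, the sketch below alternates a truncated SVD (low-rank term) with magnitude pruning of the residual (sparse term) on a weight matrix whose columns are first scaled by the second moment of calibration activations. This is an illustrative sketch only, not the authors' released implementation: the function name oats_style_decompose, the rank/sparsity/iters parameters, and the Wanda-style diagonal scaling are assumptions made for the example; PyTorch is assumed as the framework.

import torch

def oats_style_decompose(W, X, rank=32, sparsity=0.5, iters=10):
    # Illustrative sketch (not the authors' exact algorithm): scale weight
    # columns by the second moment of the input activations, then alternate
    # a truncated SVD (low-rank part) with magnitude pruning of the residual
    # (sparse part), so that W is approximated by L + S.
    d = torch.sqrt((X ** 2).mean(dim=0) + 1e-8)       # per-input-feature scale
    W_scaled = W * d                                   # emphasize outlier features

    S = torch.zeros_like(W_scaled)
    for _ in range(iters):
        # Low-rank part: truncated SVD of the current residual.
        U, sv, Vh = torch.linalg.svd(W_scaled - S, full_matrices=False)
        L = U[:, :rank] @ torch.diag(sv[:rank]) @ Vh[:rank, :]

        # Sparse part: keep the largest-magnitude entries of the residual.
        R = W_scaled - L
        k = int((1 - sparsity) * R.numel())            # number of kept entries
        thresh = R.abs().flatten().kthvalue(R.numel() - k + 1).values
        S = torch.where(R.abs() >= thresh, R, torch.zeros_like(R))

    # Undo the scaling so that L + S approximates the original W.
    return L / d, S / d

# Example: compress one linear layer's weight with calibration inputs X.
W = torch.randn(512, 256)
X = torch.randn(1024, 256)
L, S = oats_style_decompose(W, X)
print((W - (L + S)).norm() / W.norm())                # relative reconstruction error

The activation-based scaling is what makes the decomposition "outlier-aware": input features with large second moments are weighted more heavily, so the low-rank and sparse terms are fit where they matter most for the layer's outputs.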

@article{zhang2025_2409.13652,
  title={OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition},
  author={Stephen Zhang and Vardan Papyan},
  journal={arXiv preprint arXiv:2409.13652},
  year={2025}
}