Vision Transformer Compression with Structured Pruning and Low Rank Approximation
Ankur Kumar · 25 March 2022 · arXiv:2203.13444 · ViT
Papers citing "Vision Transformer Compression with Structured Pruning and Low Rank Approximation" (7 of 7 shown):
1. Research on Personalized Compression Algorithm for Pre-trained Models Based on Homomorphic Entropy Increase · Yicong Li, Xing Guo, Haohua Du · 16 Aug 2024
2. Memory-Efficient Vision Transformers: An Activation-Aware Mixed-Rank Compression Strategy · Seyedarmin Azizi, M. Nazemi, Massoud Pedram · ViT, MQ · 08 Feb 2024
3. Compressing Vision Transformers for Low-Resource Visual Learning · Eric Youn, J. SaiMitheran, Sanjana Prabhu, Siyuan Chen · ViT · 05 Sep 2023
4. Low Rank Optimization for Efficient Deep Learning: Making A Balance between Compact Architecture and Fast Training · Xinwei Ou, Zhangxin Chen, Ce Zhu, Yipeng Liu · 22 Mar 2023
5. Zero-Shot Text-to-Image Generation · Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever · VLM · 24 Feb 2021
6. What is the State of Neural Network Pruning? · Davis W. Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag · 06 Mar 2020
7. Scaling Laws for Neural Language Models · Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei · 23 Jan 2020