SAC-ViT: Semantic-Aware Clustering Vision Transformer with Early Exit

27 February 2025
Youbing Hu
Yun Cheng
Anqi Lu
Dawei Wei
Zhijun Li
Abstract

The Vision Transformer (ViT) excels in global modeling but faces deployment challenges on resource-constrained devices due to the quadratic computational complexity of its attention mechanism. To address this, we propose the Semantic-Aware Clustering Vision Transformer (SAC-ViT), a non-iterative approach to enhancing ViT's computational efficiency. SAC-ViT operates in two stages: Early Exit (EE) and Semantic-Aware Clustering (SAC). In the EE stage, a downsampled input image is processed to extract global semantic information and generate an initial inference result. If this result does not meet the EE termination criteria, the tokens are clustered into target and non-target groups. In the SAC stage, target tokens are mapped back to the original image, cropped, and embedded. These target tokens are then combined with reused non-target tokens from the EE stage, and the attention mechanism is applied within each cluster. This two-stage design, with end-to-end optimization, reduces spatial redundancy and enhances computational efficiency, significantly boosting overall ViT performance. Extensive experiments demonstrate the efficacy of SAC-ViT, reducing the FLOPs of DeiT by 62% and achieving 1.98× the throughput without compromising performance.
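
The sketch below illustrates the two-stage flow described in the abstract: an Early Exit pass on a downsampled image, a confidence-based termination check, and a Semantic-Aware Clustering pass that re-embeds only the target regions at full resolution while reusing the non-target tokens. It is a minimal conceptual sketch based only on the abstract; the shared encoder, the feature-norm token scoring, the top-k target split, the 0.5× downsampling ratio, and the softmax-confidence exit threshold are all placeholder assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SACViTSketch(nn.Module):
    """Illustrative two-stage SAC-ViT-style pipeline (EE stage + SAC stage)."""

    def __init__(self, patch=16, dim=192, num_classes=1000, depth=4, exit_threshold=0.9):
        super().__init__()
        self.patch, self.dim, self.exit_threshold = patch, dim, exit_threshold
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)   # shared across both stages
        self.head = nn.Linear(dim, num_classes)

    def tokens(self, img):
        # Patch-embed an image into a (B, N, D) token sequence.
        return self.embed(img).flatten(2).transpose(1, 2)

    def forward(self, x):
        B = x.shape[0]
        # ---- Stage 1: Early Exit (EE) on the 0.5x downsampled image ----
        x_small = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        coarse = self.encoder(self.tokens(x_small))           # (B, Ns, D)
        logits = self.head(coarse.mean(dim=1))
        if bool((logits.softmax(-1).max(-1).values >= self.exit_threshold).all()):
            return logits                                      # EE criterion met: stop here

        # ---- Stage 2: Semantic-Aware Clustering (SAC) ----
        # Score coarse tokens (proxy: feature norm) and take the top half as "target".
        scores = coarse.norm(dim=-1)                           # (B, Ns)
        Ns = coarse.shape[1]
        k = Ns // 2
        target_idx = scores.topk(k, dim=1).indices             # (B, k)

        # Map each target token back to its 2p x 2p region of the full-res image,
        # crop, and re-embed (each region yields 4 fine-grained tokens).
        regions = F.unfold(x, kernel_size=2 * self.patch, stride=2 * self.patch)
        regions = regions.transpose(1, 2)                      # (B, Ns, 3*(2p)^2)
        target_regions = torch.gather(
            regions, 1, target_idx.unsqueeze(-1).expand(-1, -1, regions.shape[-1]))
        crops = target_regions.reshape(B * k, 3, 2 * self.patch, 2 * self.patch)
        fine = self.tokens(crops).reshape(B, k * 4, self.dim)

        # Non-target coarse tokens are reused from the EE stage (no recomputation),
        # and attention runs within each cluster separately.
        mask = torch.ones(B, Ns, dtype=torch.bool, device=x.device)
        mask[torch.arange(B, device=x.device).unsqueeze(1), target_idx] = False
        non_target = coarse[mask].reshape(B, Ns - k, self.dim)
        fused = torch.cat([self.encoder(fine), self.encoder(non_target)], dim=1).mean(dim=1)
        return self.head(fused)
```

A toy forward pass such as `SACViTSketch()(torch.randn(1, 3, 224, 224))` exercises both branches depending on the (randomly initialized) exit confidence; the paper's actual clustering rule, token-to-region mapping, and end-to-end training objective are not reproduced here.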

@article{hu2025_2503.00060,
  title={SAC-ViT: Semantic-Aware Clustering Vision Transformer with Early Exit},
  author={Youbing Hu and Yun Cheng and Anqi Lu and Dawei Wei and Zhijun Li},
  journal={arXiv preprint arXiv:2503.00060},
  year={2025}
}