High-Fidelity Differential-information Driven Binary Vision Transformer

3 July 2025
Tian Gao
Zhiyuan Zhang
Kaijie Yin
Xu-Cheng Zhong
Hui Kong
Main: 8 pages · 5 figures · 9 tables · Bibliography: 2 pages · Appendix: 3 pages
Abstract

The binarization of vision transformers (ViTs) offers a promising approach to addressing the trade-off between high computational/storage demands and the constraints of edge-device deployment. However, existing binary ViT methods often suffer from severe performance degradation or rely heavily on full-precision modules. To address these issues, we propose DIDB-ViT, a novel binary ViT that is highly informative while maintaining the original ViT architecture and computational efficiency. Specifically, we design an informative attention module incorporating differential information to mitigate information loss caused by binarization and enhance high-frequency retention. To preserve the fidelity of the similarity calculations between binary Q and K tensors, we apply frequency decomposition using the discrete Haar wavelet and integrate similarities across different frequencies. Additionally, we introduce an improved RPReLU activation function to restructure the activation distribution, expanding the model's representational capacity. Experimental results demonstrate that our DIDB-ViT significantly outperforms state-of-the-art network quantization methods in multiple ViT architectures, achieving superior image classification and segmentation performance.
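To make the frequency-decomposition idea from the abstract concrete, below is a minimal, hypothetical sketch of how similarities between binarized Q and K tensors could be computed separately on the low- and high-frequency components of a one-level discrete Haar transform and then combined. This is not the authors' implementation: the class and function names, the sign/straight-through binarizer, and the fixed mixing weight `alpha` are all illustrative assumptions; the paper's actual differential-information module and improved RPReLU activation are not reproduced here.

```python
# Illustrative sketch only (not DIDB-ViT's released code).
import torch
import torch.nn as nn


def binarize(x: torch.Tensor) -> torch.Tensor:
    """Sign binarization with a straight-through estimator:
    forward pass returns sign(x), backward pass passes gradients through."""
    return x.sign().detach() + x - x.detach()


def haar_1d(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """One-level discrete Haar transform along the last (feature) dimension:
    low-pass = averages of adjacent pairs, high-pass = their differences."""
    even, odd = x[..., 0::2], x[..., 1::2]
    low = (even + odd) / 2.0   # low-frequency component
    high = (even - odd) / 2.0  # high-frequency component
    return low, high


class FrequencyDecomposedBinaryAttention(nn.Module):
    """Hypothetical binary attention: Q/K similarities are computed per Haar
    frequency band and mixed, so high-frequency detail is not discarded by
    the binary dot product. V is left full-precision in this sketch."""

    def __init__(self, dim: int, alpha: float = 0.5):
        super().__init__()
        self.scale = dim ** -0.5
        self.alpha = alpha  # assumed mixing weight between frequency bands

    def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        qb, kb = binarize(q), binarize(k)
        q_lo, q_hi = haar_1d(qb)
        k_lo, k_hi = haar_1d(kb)
        sim_lo = (q_lo @ k_lo.transpose(-2, -1)) * self.scale
        sim_hi = (q_hi @ k_hi.transpose(-2, -1)) * self.scale
        attn = ((1 - self.alpha) * sim_lo + self.alpha * sim_hi).softmax(dim=-1)
        return attn @ v


# Example shapes: (batch, heads, tokens, head_dim)
# q = k = v = torch.randn(1, 4, 197, 64)
# out = FrequencyDecomposedBinaryAttention(dim=64)(q, k, v)  # -> (1, 4, 197, 64)
```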

@article{gao2025_2507.02222,
  title={High-Fidelity Differential-information Driven Binary Vision Transformer},
  author={Tian Gao and Zhiyuan Zhang and Kaijie Yin and Xu-Cheng Zhong and Hui Kong},
  journal={arXiv preprint arXiv:2507.02222},
  year={2025}
}