ViT-1.58b: Mobile Vision Transformers in the 1-bit Era

26 June 2024
Authors: Zhengqing Yuan, Rong-Er Zhou, Hongyi Wang, Lifang He, Yanfang Ye, Lichao Sun
Communities: MQ
Links: ArXiv / PDF / HTML
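The "1.58b" in the title refers to ternary weights: each weight is restricted to {-1, 0, +1}, which carries log2(3) ≈ 1.58 bits of information. As an illustration only, the sketch below shows an absmean-style ternary quantizer of the kind popularized by BitNet b1.58; the exact scheme used in ViT-1.58b may differ, and all function and variable names here are placeholders, not the paper's code.

```python
# Illustrative sketch only: absmean ternary (1.58-bit) weight quantization,
# in the style of BitNet b1.58. Not the verified ViT-1.58b implementation;
# every name below is a placeholder.
import torch


def ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Map a full-precision weight tensor to {-1, 0, +1} plus a scale.

    The scale gamma is the mean absolute value of the tensor (absmean);
    weights are divided by gamma, rounded, and clipped to the ternary set.
    """
    gamma = w.abs().mean().clamp(min=eps)          # per-tensor scale
    w_ternary = (w / gamma).round().clamp(-1, 1)   # values in {-1, 0, +1}
    return w_ternary, gamma


def ternary_linear(x: torch.Tensor, w: torch.Tensor, b: torch.Tensor | None = None):
    """Linear layer whose weights are ternarized on the fly.

    Inference-style usage only; the straight-through estimator needed for
    quantization-aware training is omitted.
    """
    w_q, gamma = ternary_quantize(w)
    y = x @ (w_q * gamma).t()                      # dequantize for the matmul
    return y if b is None else y + b


if __name__ == "__main__":
    # Toy usage: quantize the weight of a small projection layer.
    torch.manual_seed(0)
    w = torch.randn(64, 128)                       # [out_features, in_features]
    x = torch.randn(4, 128)                        # a batch of token embeddings
    w_q, gamma = ternary_quantize(w)
    print("unique ternary values:", w_q.unique().tolist())   # [-1.0, 0.0, 1.0]
    print("scale gamma:", float(gamma))
    print("output shape:", ternary_linear(x, w).shape)       # torch.Size([4, 64])
```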

Papers citing "ViT-1.58b: Mobile Vision Transformers in the 1-bit Era"

6 / 6 papers shown:

  • Membership Inference Risks in Quantized Models: A Theoretical and Empirical Study. Eric Aubinais, Philippe Formont, Pablo Piantanida, Elisabeth Gassiat. 10 Feb 2025.
  • Efficient Ternary Weight Embedding Model: Bridging Scalability and Performance. Jiayi Chen, Chen Wu, S. Zhang, Nan Li, L. Zhang, Qi Zhang. 23 Nov 2024.
  • An Efficient Matrix Multiplication Algorithm for Accelerating Inference in Binary and Ternary Neural Networks. Mohsen Dehghankar, Mahdi Erfanian, Abolfazl Asudeh. 10 Nov 2024.
  • Mixed Non-linear Quantization for Vision Transformers [MQ]. Gihwan Kim, Jemin Lee, Sihyeong Park, Yongin Kwon, Hyungshin Kim. 26 Jul 2024.
  • ViTKD: Practical Guidelines for ViT feature knowledge distillation. Zhendong Yang, Zhe Li, Ailing Zeng, Zexian Li, Chun Yuan, Yu Li. 06 Sep 2022.
  • I-ViT: Integer-only Quantization for Efficient Vision Transformer Inference [MQ]. Zhikai Li, Qingyi Gu. 04 Jul 2022.