Mixed-precision Supernet Training from Vision Foundation Models using Low Rank Adapter

29 March 2024
Yuiko Sakuma, Masakazu Yoshimura, Junji Otsuka, Atsushi Irie, Takeshi Ohashi
MQ

Papers citing "Mixed-precision Supernet Training from Vision Foundation Models using Low Rank Adapter"

2 / 2 papers shown

PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
Zhikai Li, Mengjuan Chen, Junrui Xiao, Qingyi Gu
ViT, MQ
13 Sep 2022

Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization
Weihan Chen, Peisong Wang, Jian Cheng
MQ
13 Oct 2021