ResearchTrend.AI
VLMQ: Efficient Post-Training Quantization for Large Vision-Language Models via Hessian Augmentation

5 August 2025
Yufei Xue, Yushi Huang, Jiawei Shao, Jun Zhang
Community: MQVLM
Links: arXiv (abs) · PDF · HTML

Papers citing "VLMQ: Efficient Post-Training Quantization for Large Vision-Language Models via Hessian Augmentation"

1 of 1 papers shown
Sparse Training Scheme for Multimodal LLM
Kean Shi, Liang Chen, Haozhe Zhao, Baobao Chang
16 Sep 2025