Squeeze10-LLM: Squeezing LLMs' Weights by 10 Times via a Staged Mixed-Precision Quantization Method

24 July 2025

Authors: Qingcheng Zhu, Yangyang Ren, L. Yang, Mingbao Lin, Yanjing Li, Sheng Xu, Zichao Feng, Haodong Zhu, Yuguang Yang, Juan Zhang, Runqi Wang, Baochang Zhang

Community: MQ

Links: arXiv:2507.18073 (abs · PDF · HTML) · GitHub

Papers citing "Squeeze10-LLM: Squeezing LLMs' Weights by 10 Times via a Staged Mixed-Precision Quantization Method"

No citing papers found.