VQ-Logits: Compressing the Output Bottleneck of Large Language Models via Vector Quantized Logits

15 May 2025
Jintian Shao, Hongyi Huang, Jiayi Wu, YiMing Cheng, ZhiYu Wu, You Shan, MingKai Zheng
Community: MQ
ArXiv · PDF · HTML

Papers citing "VQ-Logits: Compressing the Output Bottleneck of Large Language Models via Vector Quantized Logits"

No citing papers found.