ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

RPTQ: Reorder-based Post-training Quantization for Large Language Models

3 April 2023
Zhihang Yuan
Lin Niu
Jia-Wen Liu
Wenyu Liu
Xinggang Wang
Yuzhang Shang
Guangyu Sun
Qiang Wu
Jiaxiang Wu
Bingzhe Wu
    MQ

Papers citing "RPTQ: Reorder-based Post-training Quantization for Large Language Models" (3 of 53 papers shown)

FlexGen: High-Throughput Generative Inference of Large Language Models with a Single GPU
Ying Sheng, Lianmin Zheng, Binhang Yuan, Zhuohan Li, Max Ryabinin, ..., Joseph E. Gonzalez, Percy Liang, Christopher Ré, Ion Stoica, Ce Zhang
13 Mar 2023

GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng-Zhen Zhang, Yuxiao Dong, Jie Tang
BDL, LRM
05 Oct 2022

PTQ-SL: Exploring the Sub-layerwise Post-training Quantization
Zhihang Yuan, Yiqi Chen, Chenhao Xue, Chenguang Zhang, Qiankun Wang, Guangyu Sun
MQ
15 Oct 2021