ResearchTrend.AI
arXiv: 2312.05693
Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge
9 December 2023
Xuan Shen
Peiyan Dong
Lei Lu
Zhenglun Kong
Zhengang Li
Ming Lin
Chao Wu
Yanzhi Wang
    MQ
Papers citing "Agile-Quant: Activation-Guided Quantization for Faster Inference of LLMs on the Edge"

14 / 14 papers shown
Edge-Cloud Collaborative Computing on Distributed Intelligence and Model Optimization: A Survey
J. H. Liu
Yao Du
Kun Yang
Yan Wang
Xiping Hu
Z. Wang
Y. Liu
Peng Sun
Azzedine Boukerche
Victor C.M. Leung
38
0
0
03 May 2025
Softpick: No Attention Sink, No Massive Activations with Rectified Softmax
Zayd Muhammad Kawakibi Zuhri
Erland Hilman Fuadi
Alham Fikri Aji
31
0
0
29 Apr 2025
ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs
Yan Yang
Yixia Li
Hongru Wang
Xuetao Wei
Jianqiao Yu
Yun-Nung Chen
Guanhua Chen
MoMe
26
0
0
17 Apr 2025
Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for Energy Efficiency, Output Accuracy, and Inference Latency
E. J. Husom
Arda Goknil
Merve Astekin
Lwin Khin Shar
Andre Kåsen
S. Sen
Benedikt Andreas Mithassel
Ahmet Soylu
MQ
32
0
0
04 Apr 2025
Oaken: Fast and Efficient LLM Serving with Online-Offline Hybrid KV Cache Quantization
Minsu Kim
Seongmin Hong
RyeoWook Ko
S. Choi
Hunjong Lee
Junsoo Kim
J. Kim
Jongse Park
57
0
0
24 Mar 2025
QuartDepth: Post-Training Quantization for Real-Time Depth Estimation on the Edge
Xuan Shen
Weize Ma
Jing Liu
Changdi Yang
Rui Ding
...
Wei Niu
Yanzhi Wang
Pu Zhao
Jun Lin
Jiuxiang Gu
MQ
52
0
0
20 Mar 2025
Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens
Xu Ouyang
Tao Ge
Thomas Hartvigsen
Zhisong Zhang
Haitao Mi
Dong Yu
MQ
90
3
0
26 Nov 2024
Pruning Foundation Models for High Accuracy without Retraining
Pu Zhao
Fei Sun
Xuan Shen
Pinrui Yu
Zhenglun Kong
Yanzhi Wang
Xue Lin
33
10
0
21 Oct 2024
Rethinking Token Reduction for State Space Models
Zheng Zhan
Yushu Wu
Zhenglun Kong
Changdi Yang
Yifan Gong
Xuan Shen
Xue Lin
Pu Zhao
Yanzhi Wang
Mamba
32
4
0
16 Oct 2024
CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression
Wenyuan Liu
Xindian Ma
Peng Zhang
Yan Wang
MQ
29
1
0
10 Oct 2024
Delta-CoMe: Training-Free Delta-Compression with Mixed-Precision for Large Language Models
Bowen Ping
Shuo Wang
Hanqing Wang
Xu Han
Yuzhuang Xu
Yukun Yan
Yun Chen
Baobao Chang
Zhiyuan Liu
Maosong Sun
MQ
43
4
0
13 Jun 2024
Empirical Guidelines for Deploying LLMs onto Resource-constrained Edge Devices
Ruiyang Qin
Dancheng Liu
Zheyu Yan
Zhaoxuan Tan
Zixuan Pan
Zhenge Jia
Meng-Long Jiang
Ahmed Abbasi
Jinjun Xiong
Yiyu Shi
51
10
0
06 Jun 2024
EdgeShard: Efficient LLM Inference via Collaborative Edge Computing
Mingjin Zhang
Jiannong Cao
Xiaoming Shen
Zeyang Cui
31
47
0
23 May 2024
EdgeQAT: Entropy and Distribution Guided Quantization-Aware Training for the Acceleration of Lightweight LLMs on the Edge
Xuan Shen
Zhenglun Kong
Changdi Yang
Zhaoyang Han
Lei Lu
...
Zhihao Shu
Wei Niu
Miriam Leeser
Pu Zhao
Yanzhi Wang
MQ
48
18
0
16 Feb 2024