FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation
Liqun Ma, Mingjie Sun, Zhiqiang Shen
arXiv:2407.07093 · 9 July 2024
Papers citing "FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation" (4 / 4 papers shown)
Membership Inference Risks in Quantized Models: A Theoretical and Empirical Study
Eric Aubinais, Philippe Formont, Pablo Piantanida, Elisabeth Gassiat
10 Feb 2025
OneBit: Towards Extremely Low-bit Large Language Models
Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che
MQ · 17 Feb 2024
BiLLM: Pushing the Limit of Post-Training Quantization for LLMs
Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, Xiaojuan Qi
MQ · 6 Feb 2024
Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks
Cuong Pham, Tuan Hoang, Thanh-Toan Do
FedML, MQ · 27 Oct 2022