arXiv: 2406.08155
QuantMoE-Bench: Examining Post-Training Quantization for Mixture-of-Experts
12 June 2024
Pingzhi Li, Xiaolong Jin, Yu Cheng, Tianlong Chen
Topics: MQ, MoE
Papers citing "QuantMoE-Bench: Examining Post-Training Quantization for Mixture-of-Experts"
4 / 4 papers shown
Foot-In-The-Door: A Multi-turn Jailbreak for LLMs
Zixuan Weng, Xiaolong Jin, Jinyuan Jia, X. Zhang
Topics: AAML
27 Feb 2025
Scaling Laws for Fine-Grained Mixture of Experts
Jakub Krajewski, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michal Krutul, ..., Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur
Topics: MoE
12 Feb 2024
Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020
Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism
M. Shoeybi, M. Patwary, Raul Puri, P. LeGresley, Jared Casper, Bryan Catanzaro
Topics: MoE
17 Sep 2019