ResearchTrend.AI
MiniCPM4: Ultra-Efficient LLMs on End Devices
arXiv: 2506.07900 (v2, latest)
9 June 2025
MiniCPM Team
Chaojun Xiao, Yuxuan Li, Xu Han, Yuzhuo Bai, Jie Cai, H. Chen, Wentong Chen, Xin Cong, Ganqu Cui, Ning Ding, Shengdan Fan, Yewei Fang, Z. Fu, Wenyu Guan, Yitong Guan, Junshao Guo, Yufeng Han, Bingxiang He, Yuxiang Huang, Cunliang Kong, Siyuan Li, Yanghao Li, Yishan Li, Zhen Li, Dan Liu, Y. Lin, Xiang Long, Quanyu Lu, Yaxi Lu, Peiyan Luo, Hongya Lyu, Litu Ou, Yinxu Pan, Zekai Qu, Qundong Shi, Zijun Song, Jiayuan Su, Zhou Su, Ao Sun, Xianghui Sun, Peijun Tang, Fangzheng Wang, Feng Wang, Yudong Wang, Yesai Wu, S. Wang, Jie Xie, Zihao Xie, Y. Yan, Zhenyu Xiao, Kaihuo Zhang, Lei Zhang, L. Zhang, Xueren Zhang, Qixin Xu, H. Vicky Zhao, Weilin Zhao, Yuanqian Zhao, Zhi Zheng, Yudi Zhang, Jie Zhou, Wei Zhou, Weilun Zhao, Zixuan Zhou, Zhiyuan Liu, Chuyue Zhou, Ge Zhou, Yanghao Zhou, Zihan Zhou, Z. Zhou, Guoyang Zeng, Chao Jia, Dahai Li, Maosong Sun
Communities: MLLM
Links: arXiv (abs) · PDF · HTML · HuggingFace (83 upvotes)

Papers citing "MiniCPM4: Ultra-Efficient LLMs on End Devices"

7 / 7 papers shown
ProxyAttn: Guided Sparse Attention via Representative Heads
Yixuan Wang, H. He, Siqi Bao, H. Wu, Haifeng Wang, Qingfu Zhu, Wanxiang Che
29 Sep 2025
VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning
Yixuan Zhou, Guoyang Zeng, Xin Liu, Xiang Li, Renjie Yu, ..., Weiyue Sun, Jiancheng Gui, Kehan Li, Z. Wu, Zhiyuan Liu
29 Sep 2025
Tequila: Trapping-free Ternary Quantization for Large Language Models
Hong Huang, Decheng Wu, Rui Cen, Guanghua Yu, Z. Li, Kai Liu, Jianchen Zhu, Peng Chen, Xue Liu, Dapeng Wu
Communities: MQ
28 Sep 2025
Predicting LLM Reasoning Performance with Small Proxy Model
Woosung Koh, Juyoung Suk, Sungjun Han, Se-Young Yun, Jay Shin
Communities: LRM, AI4CE
25 Sep 2025
E3RG: Building Explicit Emotion-driven Empathetic Response Generation System with Multimodal Large Language Model
Ronghao Lin, Shuai Shen, Weipeng Hu, Qiaolin He, Aolin Xiong, Li Huang, Haifeng Hu, Y. Tan
18 Aug 2025
iFairy: the First 2-bit Complex LLM with All Parameters in $\{\pm1, \pm i\}$
Feiyu Wang, Guoan Wang, Yihao Zhang, S. Wang, Weitao Li, Bokai Huang, Shimao Chen, Z. L. Jiang, Rui Xu, Tong Yang
Communities: MQ
07 Aug 2025
AGORA: Incentivizing Group Emergence Capability in LLMs via Group Distillation
Ren Zhuang, Ben Wang, Shuifa Sun
Communities: LRM
25 Jul 2025