We introduce Phi-4-Mini and Phi-4-Multimodal, compact yet highly capable language and multimodal models. Phi-4-Mini is a 3.8-billion-parameter language model trained on high-quality web and synthetic data; it significantly outperforms recent open-source models of similar size and matches the performance of models twice its size on math and coding tasks that require complex reasoning. This achievement is driven by a carefully curated synthetic data recipe emphasizing high-quality math and coding datasets. Compared to its predecessor, Phi-3.5-Mini, Phi-4-Mini features an expanded vocabulary of 200K tokens to better support multilingual applications, as well as grouped-query attention for more efficient long-sequence generation. Phi-4-Multimodal is a multimodal model that integrates text, vision, and speech/audio input modalities into a single model. Its novel modality-extension approach leverages LoRA adapters and modality-specific routers to enable multiple inference modes that combine modalities without interference. For example, it ranks first on the OpenASR leaderboard to date, even though the LoRA component of the speech/audio modality has just 460 million parameters. Phi-4-Multimodal supports scenarios involving (vision + language), (vision + speech), and (speech/audio) inputs, outperforming larger vision-language and speech-language models on a wide range of tasks. Additionally, we experiment with further training Phi-4-Mini to enhance its reasoning capabilities. Despite its compact 3.8-billion-parameter size, this experimental version achieves reasoning performance on par with or surpassing significantly larger models, including DeepSeek-R1-Distill-Qwen-7B and DeepSeek-R1-Distill-Llama-8B.
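The Mixture-of-LoRAs idea described above can be sketched minimally: a frozen base projection is shared across modes, and each modality routes its activations through its own low-rank adapter, so adding a modality never perturbs the base (text-only) path. This is an illustrative sketch under assumed shapes and names, not the paper's actual implementation; the class, rank, and router logic here are hypothetical.

```python
import numpy as np

class LoRALinear:
    """Frozen base linear layer with per-modality LoRA adapters.

    Hypothetical sketch of modality-specific LoRA routing: the base
    weight W is shared and frozen, while each modality (vision, speech)
    has its own trainable low-rank (A, B) pair, so modalities can be
    combined without interfering with the base language model.
    """

    def __init__(self, d_in, d_out, rank=8, modalities=("vision", "speech")):
        rng = np.random.default_rng(0)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        # One low-rank (A, B) pair per modality; only these would be trained.
        self.adapters = {
            m: (rng.standard_normal((rank, d_in)) * 0.02,  # A: d_in -> rank
                np.zeros((d_out, rank)))                   # B: rank -> d_out, zero init
            for m in modalities
        }

    def __call__(self, x, modality=None):
        y = x @ self.W.T                   # base path (text-only mode)
        if modality is not None:           # modality-specific "router"
            A, B = self.adapters[modality]
            y = y + (x @ A.T) @ B.T        # low-rank update B(Ax) added to base
        return y

layer = LoRALinear(d_in=16, d_out=16)
x = np.ones((1, 16))
# With B zero-initialized, each adapter starts as a zero update,
# so every inference mode agrees with the base model at initialization.
assert np.allclose(layer(x), layer(x, modality="vision"))
```

Zero-initializing B is the standard LoRA choice: the adapted model starts exactly equal to the frozen base, and training only gradually moves each modality's path away from it.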
@article{microsoft2025_2503.01743,
  title={Phi-4-Mini Technical Report: Compact yet Powerful Multimodal Language Models via Mixture-of-LoRAs},
  author={Microsoft and Abdelrahman Abouelenin and Atabak Ashfaq and Adam Atkinson and Hany Awadalla and Nguyen Bach and Jianmin Bao and Alon Benhaim and Martin Cai and Vishrav Chaudhary and Congcong Chen and Dong Chen and Dongdong Chen and Junkun Chen and Weizhu Chen and Yen-Chun Chen and Yi-ling Chen and Qi Dai and Xiyang Dai and Ruchao Fan and Mei Gao and Min Gao and Amit Garg and Abhishek Goswami and Junheng Hao and Amr Hendy and Yuxuan Hu and Xin Jin and Mahmoud Khademi and Dongwoo Kim and Young Jin Kim and Gina Lee and Jinyu Li and Yunsheng Li and Chen Liang and Xihui Lin and Zeqi Lin and Mengchen Liu and Yang Liu and Gilsinia Lopez and Chong Luo and Piyush Madan and Vadim Mazalov and Arindam Mitra and Ali Mousavi and Anh Nguyen and Jing Pan and Daniel Perez-Becker and Jacob Platin and Thomas Portet and Kai Qiu and Bo Ren and Liliang Ren and Sambuddha Roy and Ning Shang and Yelong Shen and Saksham Singhal and Subhojit Som and Xia Song and Tetyana Sych and Praneetha Vaddamanu and Shuohang Wang and Yiming Wang and Zhenghao Wang and Haibin Wu and Haoran Xu and Weijian Xu and Yifan Yang and Ziyi Yang and Donghan Yu and Ishmam Zabir and Jianwen Zhang and Li Lyna Zhang and Yunan Zhang and Xiren Zhou},
  journal={arXiv preprint arXiv:2503.01743},
  year={2025}
}