ResearchTrend.AI
Pangu Embedded: An Efficient Dual-system LLM Reasoner with Metacognition

28 May 2025
Hanting Chen, Yasheng Wang, Kai Han, Dong Li, Lin Li, Zhenni Bi, Jinpeng Li, Haoyu Wang, Fei Mi, Mingjian Zhu, Bin Wang, Kaikai Song, Yifei Fu, Xu He, Yu-Juan Luo, Chong Zhu, Quan He, Xueyu Wu, Wei He, Hailin Hu, Yehui Tang, Dacheng Tao, Xinghao Chen, Yunhe Wang
    LRM
Abstract

This work presents Pangu Embedded, an efficient Large Language Model (LLM) reasoner developed on Ascend Neural Processing Units (NPUs), featuring flexible fast and slow thinking capabilities. Pangu Embedded addresses the significant computational costs and inference latency challenges prevalent in existing reasoning-optimized LLMs. We propose a two-stage training framework for its construction. In Stage 1, the model is finetuned via an iterative distillation process, incorporating inter-iteration model merging to effectively aggregate complementary knowledge. This is followed by reinforcement learning on Ascend clusters, optimized by a latency-tolerant scheduler that combines stale synchronous parallelism with prioritized data queues. The RL process is guided by a Multi-source Adaptive Reward System (MARS), which generates dynamic, task-specific reward signals using deterministic metrics and lightweight LLM evaluators for mathematics, coding, and general problem-solving tasks. Stage 2 introduces a dual-system framework, endowing Pangu Embedded with a "fast" mode for routine queries and a deeper "slow" mode for complex inference. This framework offers both manual mode switching for user control and an automatic, complexity-aware mode selection mechanism that dynamically allocates computational resources to balance latency and reasoning depth. Experimental results on benchmarks including AIME 2024, GPQA, and LiveCodeBench demonstrate that Pangu Embedded, with 7B parameters, outperforms similarly sized models such as Qwen3-8B and GLM4-9B. It delivers rapid responses and state-of-the-art reasoning quality within a single, unified model architecture, highlighting a promising direction for developing powerful yet practically deployable LLM reasoners.
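The inter-iteration model merging mentioned above can be illustrated with a minimal sketch. The paper's actual merging scheme is not specified in the abstract; uniform (or weighted) parameter averaging across distillation checkpoints is shown here as one simple instance of the idea, with `merge_checkpoints` and the toy parameter dicts being hypothetical names for illustration only.

```python
# Hypothetical sketch of inter-iteration model merging: parameters from
# successive distillation iterations are averaged to aggregate
# complementary knowledge. Uniform averaging is one simple instance;
# the paper's actual scheme may weight or select checkpoints differently.

def merge_checkpoints(checkpoints, weights=None):
    """Average parameter dicts from several training iterations.

    checkpoints: list of {param_name: list of floats} dicts,
                 all sharing the same parameter names and shapes.
    weights: optional per-checkpoint mixing coefficients (should sum to 1);
             defaults to uniform averaging.
    """
    n = len(checkpoints)
    if weights is None:
        weights = [1.0 / n] * n
    merged = {}
    for name in checkpoints[0]:
        size = len(checkpoints[0][name])
        merged[name] = [
            sum(w * ckpt[name][i] for w, ckpt in zip(weights, checkpoints))
            for i in range(size)
        ]
    return merged

# Two toy "checkpoints" with a single 3-element parameter tensor.
ck_a = {"layer.w": [1.0, 2.0, 3.0]}
ck_b = {"layer.w": [3.0, 4.0, 5.0]}
print(merge_checkpoints([ck_a, ck_b]))  # {'layer.w': [2.0, 3.0, 4.0]}
```

In practice this operation would run over full model state dicts (e.g. tensors rather than Python lists), but the averaging logic is the same.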

@article{chen2025_2505.22375,
  title={Pangu Embedded: An Efficient Dual-system LLM Reasoner with Metacognition},
  author={Hanting Chen and Yasheng Wang and Kai Han and Dong Li and Lin Li and Zhenni Bi and Jinpeng Li and Haoyu Wang and Fei Mi and Mingjian Zhu and Bin Wang and Kaikai Song and Yifei Fu and Xu He and Yu Luo and Chong Zhu and Quan He and Xueyu Wu and Wei He and Hailin Hu and Yehui Tang and Dacheng Tao and Xinghao Chen and Yunhe Wang},
  journal={arXiv preprint arXiv:2505.22375},
  year={2025}
}