
Aware First, Think Less: Dynamic Boundary Self-Awareness Drives Extreme Reasoning Efficiency in Large Language Models

15 August 2025
Qiguang Chen, Dengyun Peng, Jinhao Liu, HuiKang Su, Jiannan Guan, Libo Qin, Wanxiang Che
LRM
ArXiv (abs) · PDF · HTML · GitHub (9★)

Papers citing "Aware First, Think Less: Dynamic Boundary Self-Awareness Drives Extreme Reasoning Efficiency in Large Language Models"

4 / 4 papers shown
Beware of Reasoning Overconfidence: Pitfalls in the Reasoning Process for Multi-solution Tasks
Jiannan Guan, Qiguang Chen, L. Qin, Dengyun Peng, Jinhao Liu, Liangyu Huo, Jian Xie, Wanxiang Che
LRM
154 · 0 · 0
01 Dec 2025
When to Reason: Semantic Router for vLLM
Chen Wang, Xunzhuo Liu, Yuhan Liu, Yue Zhu, Xiangxi Mo, Junchen Jiang, Huamin Chen
LRM
154 · 0 · 0
09 Oct 2025
Meta-Awareness Enhances Reasoning Models: Self-Alignment Reinforcement Learning
Yoonjeon Kim, Doohyuk Jang, Eunho Yang
ReLM · AIFin · LRM
202 · 1 · 0
26 Sep 2025
Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models
Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, ..., Andrew Wen, Shaochen Zhong, Hanjie Chen, Helen Zhou
OffRL · ReLM · LRM
750 · 266 · 0
20 Mar 2025