DynaThink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models
1 July 2024
Jiabao Pan, Yan Zhang, Chen Zhang, Zuozhu Liu, Hongwei Wang, Haizhou Li
LRM
arXiv: 2407.01009
Papers citing "DynaThink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models" (7 papers)

Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs
Jinyan Su, Jennifer Healey, Preslav Nakov, Claire Cardie
LRM · 30 Apr 2025

Fast-Slow-Thinking: Complex Task Solving with Large Language Models
Yiliu Sun, Yanfang Zhang, Zicheng Zhao, Sheng Wan, Dacheng Tao, Chen Gong
LRM · 11 Apr 2025

ShadowCoT: Cognitive Hijacking for Stealthy Reasoning Backdoors in LLMs
Gejian Zhao, Hanzhou Wu, Xinpeng Zhang, Athanasios V. Vasilakos
LRM · 08 Apr 2025

Complexity-Based Prompting for Multi-Step Reasoning
Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot
ReLM, LRM · 03 Oct 2022

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
ReLM, LRM · 24 May 2022

Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM, BDL, LRM, AI4CE · 21 Mar 2022

Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
RALM · 06 Jan 2021