Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities
arXiv 2406.02787 · 4 June 2024
Wenyue Hua, Kaijie Zhu, Lingyao Li, Lizhou Fan, Shuhang Lin, Mingyu Jin, Haochen Xue, Zelong Li, Jindong Wang, Yongfeng Zhang
LRM
Papers citing "Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities" (6 papers shown)
InductionBench: LLMs Fail in the Simplest Complexity Class
Wenyue Hua, Tyler Wong, Sun Fei, Liangming Pan, Adam Jardine, William Yang Wang
LRM | 37 | 2 | 0 | 20 Feb 2025
TypedThinker: Diversify Large Language Model Reasoning with Typed Thinking
Danqing Wang, Jianxin Ma, Fei Fang, Lei Li
LLMAG, LRM | 40 | 0 | 0 | 02 Oct 2024
To CoT or not to CoT? Chain-of-thought helps mainly on math and symbolic reasoning
Zayne Sprague, Fangcong Yin, Juan Diego Rodriguez, Dongwei Jiang, Manya Wadhwa, Prasann Singhal, Xinyu Zhao, Xi Ye, Kyle Mahowald, Greg Durrett
ReLM, LRM | 76 | 79 | 0 | 18 Sep 2024
War and Peace (WarAgent): Large Language Model-based Multi-Agent Simulation of World Wars
Wenyue Hua, Lizhou Fan, Lingyao Li, Kai Mei, Jianchao Ji, Yingqiang Ge, Libby Hemphill, Yongfeng Zhang
LM&Ro, LLMAG | 105 | 87 | 0 | 28 Nov 2023
Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought
Abulhair Saparov, He He
ELM, LRM, ReLM | 113 | 270 | 0 | 03 Oct 2022
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
LM&Ro, LRM, AI4CE, ReLM | 313 | 8,261 | 0 | 28 Jan 2022