Thought-Like-Pro: Enhancing Reasoning of Large Language Models through Self-Driven Prolog-based Chain-of-Thought
18 July 2024 · arXiv:2407.14562
Authors: Xiaoyu Tan, Yongxin Deng, Xihe Qiu, Weidi Xu, Chao Qu, Wei Chu, Yinghui Xu, Yuan Qi
Tags: LRM, AI4CE, LM&Ro

Papers citing "Thought-Like-Pro: Enhancing Reasoning of Large Language Models through Self-Driven Prolog-based Chain-of-Thought" (3 of 3 papers shown)

1. Challenges and Contributing Factors in the Utilization of Large Language Models (LLMs)
   Xiaoliang Chen, Liangbin Li, Le Chang, Yunhe Huang, Yuxuan Zhao, Yuxiao Zhang, Dinuo Li
   20 Oct 2023

2. Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought
   Abulhair Saparov, He He
   Tags: ELM, LRM, ReLM
   03 Oct 2022

3. Large Language Models are Zero-Shot Reasoners
   Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
   Tags: ReLM, LRM
   24 May 2022