Towards A Unified View of Answer Calibration for Multi-Step Reasoning

Published: 15 November 2023
Authors: Shumin Deng, Ningyu Zhang, Nay Oo, Bryan Hooi
Topics: LRM

Papers citing "Towards A Unified View of Answer Calibration for Multi-Step Reasoning"

9 papers shown:

1. Brain-Inspired Two-Stage Approach: Enhancing Mathematical Reasoning by Imitating Human Thought Processes (23 Feb 2024)
   Authors: Yezeng Chen, Zui Chen, Yi Zhou
   Topics: LRM

2. Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning (20 Oct 2023)
   Authors: Jinyuan Wang, Junlong Li, Hai Zhao
   Topics: LRM, ReLM

3. FireAct: Toward Language Agent Fine-tuning (09 Oct 2023)
   Authors: Baian Chen, Chang Shu, Ehsan Shareghi, Nigel Collier, Karthik Narasimhan, Shunyu Yao
   Topics: ALM, LLMAG

4. Design of Chain-of-Thought in Math Problem Solving (20 Sep 2023)
   Authors: Zhanming Jie, Trung Quoc Luong, Xinbo Zhang, Xiaoran Jin, Hang Li
   Topics: LRM

5. SCOTT: Self-Consistent Chain-of-Thought Distillation (03 May 2023)
   Authors: Jamie Yap, Zhengyang Wang, Zheng Li, K. Lynch, Bing Yin, Xiang Ren
   Topics: LRM

6. Learning to Reason and Memorize with Self-Notes (01 May 2023)
   Authors: Jack Lanchantin, Shubham Toshniwal, Jason Weston, Arthur Szlam, Sainbayar Sukhbaatar
   Topics: ReLM, LRM, LLMAG

7. Complexity-Based Prompting for Multi-Step Reasoning (03 Oct 2022)
   Authors: Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot
   Topics: ReLM, LRM

8. Large Language Models are Zero-Shot Reasoners (24 May 2022)
   Authors: Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
   Topics: ReLM, LRM

9. Self-Consistency Improves Chain of Thought Reasoning in Language Models (21 Mar 2022)
   Authors: Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
   Topics: ReLM, BDL, LRM, AI4CE