
Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models
6 June 2023
Fobo Shi, Peijun Qing, D. Yang, Nan Wang, Youbo Lei, H. Lu, Xiaodong Lin, Duantengchuan Li
Topics: VLM, ReLM, LLMAG, LRM

Papers citing "Prompt Space Optimizing Few-shot Reasoning Success with Large Language Models"

13 / 13 papers shown
1. Demonstration Notebook: Finding the Most Suited In-Context Learning Example from Interactions
   Yiming Tang, Bin Dong
   16 Jun 2024

2. An automatically discovered chain-of-thought prompt generalizes to novel models and datasets
   Konstantin Hebenstreit, Robert Praas, Louis P. Kiesewetter, Matthias Samwald
   Topics: LLMAG, LRM, AI4CE, ReLM
   04 May 2023

3. Edit Everything: A Text-Guided Generative System for Images Editing
   Defeng Xie, Ruichen Wang, Jiancang Ma, Chen Chen, H. Lu, D. Yang, Fobo Shi, Xiaodong Lin
   Topics: DiffM
   27 Apr 2023

4. Susceptibility to Influence of Large Language Models
   Lewis D. Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly T. Mai, Maria Vau, M. Caldwell, Augustine N. Mavor-Parker
   10 Mar 2023

5. Complexity-Based Prompting for Multi-Step Reasoning
   Yao Fu, Hao-Chun Peng, Ashish Sabharwal, Peter Clark, Tushar Khot
   Topics: ReLM, LRM
   03 Oct 2022

6. Large Language Models are Zero-Shot Reasoners
   Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
   Topics: ReLM, LRM
   24 May 2022

7. Self-Consistency Improves Chain of Thought Reasoning in Language Models
   Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
   Topics: ReLM, BDL, LRM, AI4CE
   21 Mar 2022

8. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   Topics: OSLM, ALM
   04 Mar 2022

9. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
   Jason W. Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, F. Xia, Ed H. Chi, Quoc Le, Denny Zhou
   Topics: LM&Ro, LRM, AI4CE, ReLM
   28 Jan 2022

10. Paradigm Shift in Natural Language Processing
    Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang
    26 Sep 2021

11. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity
    Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, Pontus Stenetorp
    Topics: AILaw, LRM
    18 Apr 2021

12. Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
    Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant
    Topics: RALM
    06 Jan 2021

13. Making Pre-trained Language Models Better Few-shot Learners
    Tianyu Gao, Adam Fisch, Danqi Chen
    31 Dec 2020