ResearchTrend.AI

arXiv:2405.10276 — Cited By
Revisiting OPRO: The Limitations of Small-Scale LLMs as Optimizers

16 May 2024
Tuo Zhang, Jinyue Yuan, A. Avestimehr
Tags: LRM

Papers citing "Revisiting OPRO: The Limitations of Small-Scale LLMs as Optimizers"

4 papers shown
Efficient Prompting Methods for Large Language Models: A Survey
Kaiyan Chang, Songcheng Xu, Chenglong Wang, Yingfeng Luo, Tong Xiao, Jingbo Zhu
Tags: LRM — 01 Apr 2024
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
Tags: ALM — 03 May 2023
Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Tags: ReLM, LRM — 24 May 2022
Self-Consistency Improves Chain of Thought Reasoning in Language Models
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
Tags: ReLM, BDL, LRM, AI4CE — 21 Mar 2022