Bandit-Based Prompt Design Strategy Selection Improves Prompt Optimizers

3 March 2025
Rin Ashizawa
Yoichi Hirose
Nozomu Yoshinari
Kento Uchida
Shinichi Shirakawa
Abstract

Prompt optimization aims to search for effective prompts that enhance the performance of large language models (LLMs). Although existing prompt optimization methods have discovered effective prompts, they often differ from the sophisticated prompts carefully designed by human experts. Prompt design strategies, which represent best practices for improving prompt performance, can be key to improving prompt optimization. Recently, a method termed the Autonomous Prompt Engineering Toolbox (APET) incorporated various prompt design strategies into the prompt optimization process. In APET, the LLM must implicitly select and apply the appropriate strategies, because prompt design strategies can have negative effects. This implicit selection may be suboptimal due to the limited optimization capabilities of LLMs. This paper introduces Optimizing Prompts with sTrategy Selection (OPTS), which implements explicit selection mechanisms for prompt design strategies. We propose three mechanisms, including a Thompson sampling-based approach, and integrate them into EvoPrompt, a well-known prompt optimizer. Experiments optimizing prompts for two LLMs, Llama-3-8B-Instruct and GPT-4o mini, were conducted using BIG-Bench Hard. Our results show that selecting prompt design strategies improves the performance of EvoPrompt, and that the Thompson sampling-based mechanism achieves the best overall results. Our experimental code is provided at this https URL.
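For intuition about the Thompson sampling-based mechanism, the sketch below maintains a Beta-Bernoulli posterior over each prompt design strategy's chance of improving a prompt, samples from those posteriors to pick a strategy, and updates the chosen strategy's posterior with a binary reward. This is a minimal illustration under assumed details, not the authors' OPTS implementation: the strategy names, the binary reward definition, and the apply_strategy/evaluate_prompt hooks are hypothetical placeholders.

import random

# Hypothetical strategy labels; the strategies actually used in OPTS
# come from the APET toolbox and are not listed in this abstract.
STRATEGIES = ["chain_of_thought", "expert_persona", "step_decomposition"]

# Beta(1, 1) prior for each strategy: [successes + 1, failures + 1].
posterior = {s: [1, 1] for s in STRATEGIES}

def select_strategy():
    """Thompson sampling: draw one sample from each strategy's Beta
    posterior and pick the strategy with the highest sampled value."""
    samples = {s: random.betavariate(a, b) for s, (a, b) in posterior.items()}
    return max(samples, key=samples.get)

def update(strategy, improved):
    """Update the chosen strategy's posterior with a binary reward:
    1 if applying the strategy improved the prompt's score, else 0."""
    if improved:
        posterior[strategy][0] += 1
    else:
        posterior[strategy][1] += 1

def step(prompt, apply_strategy, evaluate_prompt):
    """One illustrative optimization step: pick a strategy, apply it
    (e.g., by asking an LLM to rewrite the prompt accordingly), score
    the candidate, and feed the outcome back into the posterior."""
    s = select_strategy()
    candidate = apply_strategy(prompt, s)
    improved = evaluate_prompt(candidate) > evaluate_prompt(prompt)
    update(s, improved)
    return candidate if improved else prompt

In the paper's setting, the reward signal would presumably be tied to EvoPrompt's fitness evaluations on the target task; the simple before/after comparison above is only one plausible choice of reward.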

@article{ashizawa2025_2503.01163,
  title={Bandit-Based Prompt Design Strategy Selection Improves Prompt Optimizers},
  author={Rin Ashizawa and Yoichi Hirose and Nozomu Yoshinari and Kento Uchida and Shinichi Shirakawa},
  journal={arXiv preprint arXiv:2503.01163},
  year={2025}
}