
Beyond Prompt Content: Enhancing LLM Performance via Content-Format Integrated Prompt Optimization

Abstract

Large Language Models (LLMs) have shown significant capability across various tasks, with their real-world effectiveness often driven by prompt design. While recent research has focused on optimizing prompt content, the role of prompt formatting, a critical but often overlooked dimension, has received limited systematic investigation. In this paper, we introduce Content-Format Integrated Prompt Optimization (CFPO), an innovative methodology that jointly optimizes both prompt content and formatting through an iterative refinement process. CFPO leverages natural language mutations to explore content variations and employs a dynamic format exploration strategy that systematically evaluates diverse format options. Extensive evaluations across multiple tasks and open-source LLMs show that CFPO achieves measurable performance improvements over content-only optimization methods. This highlights the importance of integrated content-format optimization and offers a practical, model-agnostic approach to enhancing LLM performance. Code is available at this https URL.

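To make the abstract's description concrete, the sketch below illustrates what a joint content-format search loop could look like. This is not the authors' implementation: the format pool, the content mutator, and the evaluator (FORMATS, mutate_content, evaluate) are illustrative stand-ins for an LLM-driven mutation step, a format exploration strategy, and a task-specific dev-set scorer.

import itertools
import random

# Illustrative pool of format variants; a real system would explore many more.
FORMATS = [
    "### Instruction:\n{content}\n### Response:",  # markdown-style
    "<task>{content}</task>",                      # XML-style
    "Instruction: {content}\nAnswer:",             # plain-text style
]

def mutate_content(content: str) -> list[str]:
    """Stand-in for natural-language mutations an LLM would propose."""
    return [content, content + " Think step by step.", "Be concise. " + content]

def evaluate(prompt: str) -> float:
    """Stand-in for running the target LLM on a dev set and scoring accuracy."""
    return random.random()

def joint_prompt_search(seed_content: str, rounds: int = 3):
    """Iteratively refine prompt content and format together."""
    best = {"content": seed_content, "format": FORMATS[0], "score": float("-inf")}
    contents = [seed_content]
    for _ in range(rounds):
        # Score every (content, format) pair in this round's candidate pool.
        for content, fmt in itertools.product(contents, FORMATS):
            score = evaluate(fmt.format(content=content))
            if score > best["score"]:
                best = {"content": content, "format": fmt, "score": score}
        # Next round explores mutations of the best content found so far.
        contents = mutate_content(best["content"])
    return best["format"].format(content=best["content"]), best["score"]

if __name__ == "__main__":
    prompt, score = joint_prompt_search("Solve the math word problem.")
    print(f"score={score:.3f}\n{prompt}")

The key design point mirrored here is that content and format are varied in the same loop rather than fixing one and tuning the other, so a content edit that only helps under a particular format can still be discovered.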
@article{liu2025_2502.04295,
  title={Beyond Prompt Content: Enhancing LLM Performance via Content-Format Integrated Prompt Optimization},
  author={Yuanye Liu and Jiahang Xu and Li Lyna Zhang and Qi Chen and Xuan Feng and Yang Chen and Zhongxin Guo and Yuqing Yang and Peng Cheng},
  journal={arXiv preprint arXiv:2502.04295},
  year={2025}
}