
D.Va: Validate Your Demonstration First Before You Use It

Abstract

In-context learning (ICL) has demonstrated significant potential in enhancing the capabilities of large language models (LLMs) during inference. It is well established that ICL relies heavily on selecting effective demonstrations in order to generate outputs that better align with the expected results. For demonstration selection, previous approaches have typically relied on intuitive metrics to evaluate the effectiveness of demonstrations, which often results in limited robustness and poor cross-model generalization. To tackle these challenges, we propose a novel method, \textbf{D}emonstration \textbf{VA}lidation (\textbf{D.Va}), which integrates a demonstration validation perspective into this field. By introducing a demonstration validation mechanism, our method effectively identifies demonstrations that are both effective and highly generalizable. \textbf{D.Va} surpasses all existing demonstration selection techniques across both natural language understanding (NLU) and natural language generation (NLG) tasks. Additionally, we demonstrate the robustness and generalizability of our approach across various language models and retrieval models.
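The abstract does not spell out how the validation scores are computed; the sketch below is only an illustrative guess at what a validation-based demonstration selection loop could look like, not the paper's actual algorithm. All names here (`select_demonstrations`, `lm_log_likelihood`, the prompt template) are assumptions introduced for illustration: each candidate demonstration is scored by how much it helps the model predict the gold outputs of held-out validation examples when prepended to the prompt, and the top-scoring candidates are kept.

```python
from typing import Callable, List, Tuple

# Hypothetical types: a "demonstration" is an (input, output) text pair, and
# `lm_log_likelihood(prompt, target)` stands in for any scoring call that
# returns the language model's log-likelihood of `target` given `prompt`.
Demonstration = Tuple[str, str]


def select_demonstrations(
    candidates: List[Demonstration],
    validation_set: List[Demonstration],
    lm_log_likelihood: Callable[[str, str], float],
    k: int = 4,
) -> List[Demonstration]:
    """Rank candidate demonstrations by a validation score and keep the top-k."""
    scored = []
    for demo_input, demo_output in candidates:
        demo_block = f"Input: {demo_input}\nOutput: {demo_output}\n\n"
        # Average log-likelihood of the validation targets when this
        # candidate demonstration is prepended to the prompt.
        score = sum(
            lm_log_likelihood(demo_block + f"Input: {val_in}\nOutput:", " " + val_out)
            for val_in, val_out in validation_set
        ) / max(len(validation_set), 1)
        scored.append((score, (demo_input, demo_output)))

    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [demo for _, demo in scored[:k]]
```

Scoring against held-out validation examples, rather than against an intuitive similarity metric alone, is what would give such a procedure its cross-model robustness: a demonstration is kept only if it measurably improves predictions, regardless of which model produces the likelihoods.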

@article{zhang2025_2502.13646,
  title={D.Va: Validate Your Demonstration First Before You Use It},
  author={Qi Zhang and Zhiqing Xiao and Ruixuan Xiao and Lirong Gao and Junbo Zhao},
  journal={arXiv preprint arXiv:2502.13646},
  year={2025}
}