Embracing Collaboration Over Competition: Condensing Multiple Prompts for Visual In-Context Learning

Visual In-Context Learning (VICL) enables models to solve vision tasks adaptively by leveraging pixel-level demonstrations, mimicking human-like task completion through analogy. Prompt selection is critical in VICL, but current methods assume the existence of a single "ideal" prompt in a pool of candidates, which in practice may not hold. Multiple suitable prompts may exist, yet each individually falls short, making selection difficult and excluding useful context. To address this, we propose a new perspective: prompt condensation. Rather than relying on a single prompt, candidate prompts collaborate to efficiently integrate informative contexts without sacrificing resolution. We devise Condenser, a lightweight external plugin that compresses relevant fine-grained context across multiple prompts. Optimized end-to-end with the backbone, Condenser ensures accurate integration of contextual cues. Experiments demonstrate that Condenser outperforms state-of-the-art methods across benchmark tasks, showing superior context compression, scalability with more prompts, and enhanced computational efficiency compared to ensemble methods, positioning it as a highly competitive solution for VICL. Code is open-sourced at this https URL.
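The abstract does not specify Condenser's architecture, so the following is only an illustrative sketch of the general "prompt condensation" idea: fusing several candidate prompt features into one condensed context via relevance weights, instead of competitively selecting a single prompt. All names, shapes, and the bilinear scoring function here are assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def condense(prompt_feats, query_feat, w):
    """Fuse K candidate prompt features into a single condensed context.

    prompt_feats: (K, D) features of K candidate prompts
    query_feat:   (D,)   feature of the query image
    w:            (D, D) stand-in for the plugin's learnable parameters
                  (in the paper these would be trained end-to-end
                  with the backbone; here they are random)
    """
    # Score each prompt's relevance to the query (hypothetical bilinear form).
    scores = prompt_feats @ w @ query_feat          # shape (K,)
    # Softmax over prompts: collaboration instead of hard selection.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Weighted fusion of all candidate prompts into one context vector.
    return weights @ prompt_feats                   # shape (D,)

K, D = 4, 8
prompts = rng.normal(size=(K, D))   # toy stand-ins for encoded prompts
query = rng.normal(size=D)          # toy stand-in for the encoded query
w = rng.normal(size=(D, D)) * 0.1
context = condense(prompts, query, w)
print(context.shape)  # (8,)
```

In this toy setup the condensed context has the same dimensionality as a single prompt feature, so a frozen VICL backbone could consume it in place of one selected prompt, which is the practical appeal of condensation over ensembling.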
@article{wang2025_2504.21263,
  title={Embracing Collaboration Over Competition: Condensing Multiple Prompts for Visual In-Context Learning},
  author={Jinpeng Wang and Tianci Luo and Yaohua Zha and Yan Feng and Ruisheng Luo and Bin Chen and Tao Dai and Long Chen and Yaowei Wang and Shu-Tao Xia},
  journal={arXiv preprint arXiv:2504.21263},
  year={2025}
}