
From Introspection to Best Practices: Principled Analysis of Demonstrations in Multimodal In-Context Learning

Nan Xu
Fei Wang
Sheng Zhang
Hoifung Poon
Muhao Chen
Abstract

Motivated by the in-context learning (ICL) capabilities of Large Language Models (LLMs), multimodal LLMs with an additional visual modality also exhibit similar ICL abilities when multiple image-text pairs are provided as demonstrations. However, relatively little work has investigated the principles behind how and why multimodal ICL works. We conduct a systematic and principled evaluation of multimodal ICL for models of different scales on a broad spectrum of new yet critical tasks. Through perturbations over different modality information, we show that modalities matter differently across tasks in multimodal ICL. Guided by task-specific modality impact, we recommend modality-driven demonstration strategies to boost ICL performance. We also find that models may follow inductive biases from multimodal ICL even when these are rarely seen in, or contradict, semantic priors from pretraining data. Our principled analysis provides a comprehensive way of understanding the role of demonstrations in multimodal in-context learning, and sheds light on effectively improving multimodal ICL on a wide range of tasks.

@article{xu2025_2407.00902,
  title={From Introspection to Best Practices: Principled Analysis of Demonstrations in Multimodal In-Context Learning},
  author={Nan Xu and Fei Wang and Sheng Zhang and Hoifung Poon and Muhao Chen},
  journal={arXiv preprint arXiv:2407.00902},
  year={2025}
}