
Enhancing Multimodal Retrieval via Complementary Information Extraction and Alignment

Annual Meeting of the Association for Computational Linguistics (ACL), 2026
Delong Zeng
Yuexiang Xie
Yaliang Li
Ying Shen
Main: 8 pages · 3 figures · 9 tables · Bibliography: 3 pages · Appendix: 3 pages
Abstract

Multimodal retrieval has emerged as a promising yet challenging research direction in recent years. Most existing studies in multimodal retrieval focus on capturing the information in multimodal data that is similar to its paired texts, but often ignore the complementary information contained in multimodal data. In this study, we propose CIEA, a novel multimodal retrieval approach that employs Complementary Information Extraction and Alignment. CIEA transforms both the text and the images in documents into a unified latent space and features a complementary information extractor designed to identify and preserve differences in the image representations. We optimize CIEA using two complementary contrastive losses to ensure semantic integrity and effectively capture the complementary information contained in images. Extensive experiments demonstrate the effectiveness of CIEA, which achieves significant improvements over both divide-and-conquer models and universal dense retrieval models. We provide an ablation study, further discussions, and case studies to highlight the advancements achieved by CIEA. To promote further research in the community, we have released the source code at this https URL.
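As background for the contrastive objectives mentioned above, the sketch below shows a standard InfoNCE-style contrastive loss over embeddings in a shared latent space. This is a generic illustration only: the function name `info_nce`, the toy vectors, and the temperature value are assumptions, not CIEA's actual formulation, which combines two such complementary terms as defined in the paper.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.07):
    # Generic InfoNCE loss (a hypothetical stand-in, not CIEA's exact loss):
    # -log( exp(sim(a, p)/tau) / sum_k exp(sim(a, k)/tau) )
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

# Toy example: the loss is small when anchor and positive align,
# and large when a negative is closer to the anchor than the positive.
text_emb = [1.0, 0.0]
image_emb_pos = [1.0, 0.0]
image_emb_neg = [[0.0, 1.0]]
aligned_loss = info_nce(text_emb, image_emb_pos, image_emb_neg)
misaligned_loss = info_nce(text_emb, [0.0, 1.0], [[1.0, 0.0]])
```

In a CIEA-like setup, one such term would pull paired text and image representations together while a second, complementary term preserves the image-specific information the extractor isolates; the exact pairing of anchors and positives is specified in the paper, not here.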
