Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability

The task of identifying multimodal image-text representations has garnered increasing attention, particularly with models such as CLIP (Contrastive Language-Image Pretraining), which demonstrate exceptional performance in learning complex associations between images and text. Despite these advancements, ensuring the interpretability of such models is paramount for their safe deployment in real-world applications, such as healthcare. While numerous interpretability methods have been developed for unimodal tasks, these approaches often fail to transfer effectively to multimodal contexts due to inherent differences in the representation structures. Bottleneck methods, well established in information theory, have been applied to enhance CLIP's interpretability; however, they are often hindered by strong assumptions or intrinsic randomness. To overcome these challenges, we propose the Narrowing Information Bottleneck Theory, a novel framework that fundamentally redefines the traditional bottleneck approach. This theory is specifically designed to satisfy contemporary attribution axioms, providing a more robust and reliable solution for improving the interpretability of multimodal models. In our experiments, compared to state-of-the-art methods, our approach improves image interpretability by an average of 9%, improves text interpretability by an average of 58.83%, and accelerates processing speed by 63.95%. Our code is publicly accessible at this https URL.
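To make the bottleneck baseline concrete, the following is a minimal illustrative sketch of a generic variational-bottleneck-style attribution for image-text similarity, of the kind the abstract contrasts against; it is not the proposed Narrowing Information Bottleneck method. The patch features, pooling, and hyperparameters (beta, steps, lr) are hypothetical stand-ins for CLIP internals.

# Illustrative sketch only: a generic information-bottleneck-style attribution
# for image-text similarity. This is NOT the authors' Narrowing Information
# Bottleneck method; features, pooling, and hyperparameters are hypothetical.
import torch

def bottleneck_attribution(patch_feats, text_emb, beta=0.1, steps=200, lr=0.05):
    """Learn a per-patch keep-probability lambda in [0,1]; patches whose features
    can be replaced by noise without hurting image-text similarity get low relevance."""
    logit_lam = torch.zeros(patch_feats.shape[0], 1, requires_grad=True)
    mu = patch_feats.mean(0, keepdim=True)
    sigma = patch_feats.std(0, keepdim=True) + 1e-6
    opt = torch.optim.Adam([logit_lam], lr=lr)
    for _ in range(steps):
        lam = torch.sigmoid(logit_lam)                           # keep-probability per patch
        eps = torch.randn_like(patch_feats)
        z = lam * patch_feats + (1 - lam) * (mu + sigma * eps)   # bottlenecked (noised) features
        img_emb = z.mean(0)                                      # crude pooling stand-in for CLIP
        sim = torch.cosine_similarity(img_emb, text_emb, dim=0)  # preserve image-text alignment
        rate = lam.mean()                                        # penalize information kept
        loss = -sim + beta * rate
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(logit_lam).detach().squeeze(-1)         # per-patch relevance scores

# Toy usage with random tensors standing in for CLIP patch features / text embedding.
if __name__ == "__main__":
    patches = torch.randn(49, 512)   # e.g., a 7x7 patch grid with 512-d features (hypothetical)
    text = torch.randn(512)
    relevance = bottleneck_attribution(patches, text)
    print(relevance.shape, relevance.min().item(), relevance.max().item())

In this sketch, the strong assumptions and intrinsic randomness the abstract criticizes show up directly: the Gaussian noise model for suppressed patches and the stochastic noise draws at each optimization step.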
@article{zhu2025_2502.14889,
  title   = {Narrowing Information Bottleneck Theory for Multimodal Image-Text Representations Interpretability},
  author  = {Zhiyu Zhu and Zhibo Jin and Jiayu Zhang and Nan Yang and Jiahao Huang and Jianlong Zhou and Fang Chen},
  journal = {arXiv preprint arXiv:2502.14889},
  year    = {2025}
}