Decoupled Multimodal Prototypes for Visual Recognition with Missing Modalities

Multimodal learning enhances deep learning models by enabling them to perceive and understand information from multiple data modalities, such as visual and textual inputs. However, most existing approaches assume that all modalities are available, an assumption that often fails in real-world applications. Recent works have introduced learnable missing-case-aware prompts to mitigate the performance degradation caused by missing modalities while reducing the need for extensive model fine-tuning. Building on the effectiveness of this missing-case-aware handling, we propose a novel decoupled prototype-based output head, which leverages missing-case-aware class-wise prototypes tailored for each individual modality. This approach dynamically adapts to different missing-modality scenarios and can be seamlessly integrated with existing prompt-based methods. Extensive experiments demonstrate that our proposed output head significantly improves performance across a wide range of missing-modality scenarios and varying missing rates.
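To make the idea concrete, the following is a minimal sketch of what a decoupled, missing-case-aware prototype head could look like: one bank of class-wise prototypes per (modality, missing case), with classification by similarity between each available modality's features and its prototypes. All specifics here (the modality names, the three missing cases, the embedding dimension, and the cosine-similarity scoring with averaged logits) are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

MISSING_CASES = ("complete", "image_only", "text_only")  # assumed missing-case taxonomy

class DecoupledPrototypeHead(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        # One set of learnable class-wise prototypes per (modality, missing case).
        self.prototypes = nn.ParameterDict({
            f"{modality}_{case}": nn.Parameter(torch.randn(num_classes, embed_dim))
            for modality in ("image", "text")
            for case in MISSING_CASES
        })

    def forward(self, feats: dict, case: str) -> torch.Tensor:
        """feats maps a modality name to (batch, embed_dim) features for the
        modalities that are actually present; `case` selects the prototype
        bank matching the current missing-modality scenario."""
        logits = []
        for modality, x in feats.items():
            protos = self.prototypes[f"{modality}_{case}"]
            # Cosine similarity between features and class prototypes.
            sim = F.normalize(x, dim=-1) @ F.normalize(protos, dim=-1).t()
            logits.append(sim)
        # Average per-modality similarity scores over the available modalities.
        return torch.stack(logits, dim=0).mean(dim=0)

# Usage example: a batch where the text modality is missing.
head = DecoupledPrototypeHead(num_classes=23, embed_dim=768)
image_feats = torch.randn(4, 768)  # e.g., features from a frozen multimodal backbone
scores = head({"image": image_feats}, case="image_only")
print(scores.shape)  # torch.Size([4, 23])

Because the head only scores the modalities that are present and selects prototypes by missing case, it can sit on top of an existing prompt-based backbone without altering the backbone itself; how the prototype banks are trained and combined would follow the paper's method rather than the simple averaging shown here.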
@article{lu2025_2505.08283,
  title   = {Decoupled Multimodal Prototypes for Visual Recognition with Missing Modalities},
  author  = {Jueqing Lu and Yuanyuan Qi and Xiaohao Yang and Shujie Zhou and Lan Du},
  journal = {arXiv preprint arXiv:2505.08283},
  year    = {2025}
}