CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features

Abstract

Multimodal encoders like CLIP excel at tasks such as zero-shot image classification and cross-modal retrieval, but they require vast amounts of paired training data. We propose canonical similarity analysis (CSA), which uses two unimodal encoders to replicate a multimodal encoder with limited paired data. CSA maps unimodal features into a multimodal space, using a new similarity score to retain only the multimodal information. CSA involves only inference through the unimodal encoders and a cubic-complexity matrix decomposition, eliminating the need for extensive GPU-based model training. Given pre-trained unimodal encoders, experiments show that CSA outperforms CLIP on ImageNet classification and misinformative news-caption detection while requiring 50,000× fewer multimodal data pairs to bridge the modalities. CSA also surpasses the state-of-the-art method for mapping unimodal features to multimodal features. Finally, we demonstrate CSA with modalities beyond image and text, paving the way for modality pairs with limited paired multimodal data but abundant unpaired unimodal data, such as lidar and text.
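
The sketch below illustrates the general idea described in the abstract: fit a linear map from two sets of pre-computed unimodal features into a shared space using a classical canonical-correlation-style decomposition, then score cross-modal similarity there. It is a minimal illustration, not the paper's method: the exact similarity score and decomposition details are not reproduced, all variable names are hypothetical, standard CCA and cosine similarity stand in for CSA's components, and random arrays stand in for real encoder outputs.

# Minimal sketch of the CSA idea using classical canonical correlation
# analysis (CCA). Illustrative only; see the paper for the actual
# similarity score and decomposition.
import numpy as np
from sklearn.cross_decomposition import CCA

# Stand-ins for features from two pre-trained unimodal encoders on a
# small paired dataset (e.g. an image encoder and a text encoder).
rng = np.random.default_rng(0)
n_pairs, d_img, d_txt, d_shared = 1000, 512, 384, 128
image_feats = rng.normal(size=(n_pairs, d_img))
text_feats = rng.normal(size=(n_pairs, d_txt))

# Fit linear maps into a shared space on the limited paired data.
# This is the cubic-cost matrix-decomposition step: no GPU training.
cca = CCA(n_components=d_shared, max_iter=1000)
cca.fit(image_feats, text_feats)

def cross_modal_similarity(img_x, txt_y):
    # Project unimodal features into the shared space, then use cosine
    # similarity (a stand-in for the paper's weighted similarity score).
    zx, zy = cca.transform(img_x, txt_y)
    zx /= np.linalg.norm(zx, axis=1, keepdims=True)
    zy /= np.linalg.norm(zy, axis=1, keepdims=True)
    return zx @ zy.T  # (n_img, n_txt) similarity matrix

sims = cross_modal_similarity(image_feats[:5], text_feats[:5])
print(sims.shape)  # (5, 5); diagonal entries correspond to the original pairs

With real encoder features, the resulting similarity matrix can drive zero-shot classification or cross-modal retrieval in the same way CLIP's image-text similarities do.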

@article{li2025_2410.07610,
  title={CSA: Data-efficient Mapping of Unimodal Features to Multimodal Features},
  author={Po-han Li and Sandeep P. Chinchali and Ufuk Topcu},
  journal={arXiv preprint arXiv:2410.07610},
  year={2025}
}