Enhancing Sentiment Analysis through Multimodal Fusion: A BERT-DINOv2 Approach
Multimodal sentiment analysis extends conventional, text-only sentiment analysis by incorporating information from additional modalities such as images and audio. This paper proposes a novel multimodal sentiment analysis architecture that integrates text and image data to provide a more comprehensive understanding of sentiment. For text feature extraction, we utilize BERT, a natural language processing model; for image feature extraction, we employ DINOv2, a vision-transformer-based model. The textual and visual latent features are integrated using three proposed fusion techniques: the Basic Fusion Model, the Self-Attention Fusion Model, and the Dual-Attention Fusion Model. Experiments on three datasets, Memotion 7k, MVSA-single, and MVSA-multi, demonstrate the viability and practicality of the proposed multimodal architecture.
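The abstract does not specify the internals of the fusion models, so the following is only a minimal sketch of the general idea behind the Basic Fusion variant: encode text with BERT, encode the image with DINOv2, concatenate the two [CLS] embeddings, and classify. The checkpoint names (bert-base-uncased, facebook/dinov2-base), the classifier head, and the number of sentiment classes are assumptions, not the authors' configuration.

# Hedged sketch of a BERT + DINOv2 late-fusion sentiment classifier.
# All hyperparameters and checkpoint choices are illustrative assumptions.
import torch
import torch.nn as nn
from PIL import Image
from transformers import AutoTokenizer, AutoImageProcessor, AutoModel


class BasicFusionSentimentModel(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.image_encoder = AutoModel.from_pretrained("facebook/dinov2-base")
        text_dim = self.text_encoder.config.hidden_size    # 768 for bert-base
        image_dim = self.image_encoder.config.hidden_size  # 768 for dinov2-base
        # Basic fusion as assumed here: concatenate the two [CLS] features
        # and pass them through a small MLP classifier.
        self.classifier = nn.Sequential(
            nn.Linear(text_dim + image_dim, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, num_classes),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        text_feat = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state[:, 0]      # BERT [CLS] embedding
        image_feat = self.image_encoder(
            pixel_values=pixel_values
        ).last_hidden_state[:, 0]      # DINOv2 [CLS] embedding
        fused = torch.cat([text_feat, image_feat], dim=-1)
        return self.classifier(fused)


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
    model = BasicFusionSentimentModel(num_classes=3)

    text_inputs = tokenizer(["what a great day!"], return_tensors="pt",
                            padding=True, truncation=True)
    dummy_image = Image.new("RGB", (224, 224), color=(128, 128, 128))  # placeholder image
    image_inputs = processor(images=dummy_image, return_tensors="pt")
    logits = model(text_inputs["input_ids"],
                   text_inputs["attention_mask"],
                   image_inputs["pixel_values"])
    print(logits.shape)  # torch.Size([1, 3])

The Self-Attention and Dual-Attention fusion variants named in the abstract would replace the plain concatenation step with attention over the two feature streams; their exact formulation is given in the paper, not here.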
@article{zhao2025_2503.07943,
  title   = {Enhancing Sentiment Analysis through Multimodal Fusion: A BERT-DINOv2 Approach},
  author  = {Taoxu Zhao and Meisi Li and Kehao Chen and Liye Wang and Xucheng Zhou and Kunal Chaturvedi and Mukesh Prasad and Ali Anaissi and Ali Braytee},
  journal = {arXiv preprint arXiv:2503.07943},
  year    = {2025}
}