Multimodal Sentiment Analysis on CMU-MOSEI Dataset using Transformer-based Models

9 May 2025
Jugal Gajjar
Kaustik Ranaware
Abstract

This project performs multimodal sentiment analysis on the CMU-MOSEI dataset using transformer-based models with early fusion to integrate text, audio, and visual modalities. We employ BERT-based encoders for each modality, extracting embeddings that are concatenated before classification. The model achieves strong performance, with 97.87% 7-class accuracy and a 0.9682 F1-score on the test set, demonstrating the effectiveness of early fusion in capturing cross-modal interactions. Training used Adam optimization (lr=1e-4), dropout (0.3), and early stopping to ensure generalization and robustness. The results highlight the strength of transformer architectures in modeling multimodal sentiment, with a low MAE (0.1060) indicating precise sentiment-intensity prediction. Future work may compare fusion strategies or enhance interpretability. This approach leverages multimodal learning, effectively combining linguistic, acoustic, and visual cues for sentiment analysis.
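
As a rough illustration of the pipeline the abstract describes, the sketch below implements early fusion in PyTorch: each modality is embedded, the embeddings are concatenated, and a linear head with dropout predicts one of seven sentiment classes. The acoustic and visual feature dimensions (74 and 35, typical of CMU-MOSEI's COVAREP and facial-feature extractions) and the linear projections for the non-text modalities are assumptions for illustration; the paper states that BERT-based encoders are used for every modality, which this sketch simplifies.

```python
# Hedged sketch of early-fusion multimodal sentiment classification.
# Assumptions (not confirmed by the abstract): audio/visual inputs are
# fixed-size feature vectors, and simple linear projections stand in for
# the paper's per-modality BERT-based encoders.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class EarlyFusionSentiment(nn.Module):
    def __init__(self, audio_dim=74, visual_dim=35, hidden=768, num_classes=7):
        super().__init__()
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        # Project non-text modalities to the same width as the text embedding.
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.visual_proj = nn.Linear(visual_dim, hidden)
        self.dropout = nn.Dropout(0.3)  # dropout rate from the abstract
        self.classifier = nn.Linear(hidden * 3, num_classes)

    def forward(self, input_ids, attention_mask, audio_feats, visual_feats):
        text_emb = self.text_encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output  # [batch, 768]
        # Early fusion: concatenate all modality embeddings, then classify.
        fused = torch.cat(
            [text_emb, self.audio_proj(audio_feats), self.visual_proj(visual_feats)],
            dim=-1,
        )
        return self.classifier(self.dropout(fused))

model = EarlyFusionSentiment()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr from the abstract

# Minimal usage with placeholder features:
tok = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tok(["an example utterance"], return_tensors="pt", padding=True)
audio = torch.randn(1, 74)   # placeholder acoustic features
visual = torch.randn(1, 35)  # placeholder visual features
logits = model(enc["input_ids"], enc["attention_mask"], audio, visual)  # [1, 7]
```

The key design point of early fusion is that the modalities are combined before the classifier sees them, so a single head can model cross-modal interactions; late-fusion alternatives would instead classify each modality separately and merge the predictions.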

@article{gajjar2025_2505.06110,
  title={Multimodal Sentiment Analysis on CMU-MOSEI Dataset using Transformer-based Models},
  author={Jugal Gajjar and Kaustik Ranaware},
  journal={arXiv preprint arXiv:2505.06110},
  year={2025}
}