MCFNet: A Multimodal Collaborative Fusion Network for Fine-Grained Semantic Classification

29 May 2025
Yang Qiao, Xiaoyu Zhong, Xiaofeng Gu, Zhiguo Yu
Main: 10 pages, 8 figures, 1 table; bibliography: 2 pages
Abstract

Multimodal information processing has become increasingly important for enhancing image classification performance. However, the intricate and implicit dependencies across different modalities often hinder conventional methods from effectively capturing fine-grained semantic interactions, thereby limiting their applicability in high-precision classification tasks. To address this issue, we propose a novel Multimodal Collaborative Fusion Network (MCFNet) designed for fine-grained classification. The proposed MCFNet architecture incorporates a regularized integrated fusion module that improves intra-modal feature representation through modality-specific regularization strategies, while facilitating precise semantic alignment via a hybrid attention mechanism. Additionally, we introduce a multimodal decision classification module, which jointly exploits inter-modal correlations and unimodal discriminative features by integrating multiple loss functions within a weighted voting paradigm. Extensive experiments and ablation studies on benchmark datasets demonstrate that the proposed MCFNet framework achieves consistent improvements in classification accuracy, confirming its effectiveness in modeling subtle cross-modal semantics.
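The paper's abstract names a regularized integrated fusion module built from modality-specific regularization and a hybrid attention mechanism, but no implementation details are given here. The following is a minimal PyTorch sketch of what such a block might look like; the module name, dimensions, and the choice of per-modality LayerNorm plus dropout as the "modality-specific regularization" are all assumptions for illustration, not details from the paper.

import torch
import torch.nn as nn


class HybridAttentionFusion(nn.Module):
    """Hypothetical hybrid-attention fusion of image and text features."""

    def __init__(self, dim: int = 512, heads: int = 8,
                 p_img: float = 0.1, p_txt: float = 0.3):
        super().__init__()
        # Modality-specific regularization: separate norm + dropout per
        # modality (assumed; the paper only names the strategy).
        self.reg_img = nn.Sequential(nn.LayerNorm(dim), nn.Dropout(p_img))
        self.reg_txt = nn.Sequential(nn.LayerNorm(dim), nn.Dropout(p_txt))
        # "Hybrid" attention read here as cross-attention in both
        # directions followed by self-attention over the joint sequence.
        self.img2txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt2img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # img: (B, N_img, dim) patch features; txt: (B, N_txt, dim) tokens
        img, txt = self.reg_img(img), self.reg_txt(txt)
        img_aligned, _ = self.img2txt(img, txt, txt)  # image attends to text
        txt_aligned, _ = self.txt2img(txt, img, img)  # text attends to image
        fused = torch.cat([img + img_aligned, txt + txt_aligned], dim=1)
        out, _ = self.self_attn(fused, fused, fused)
        return self.norm(fused + out)  # (B, N_img + N_txt, dim)

The bidirectional cross-attention is one plausible way to realize the "precise semantic alignment" the abstract claims; a single-direction variant would also fit the description.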

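Likewise, the multimodal decision classification module is described only as combining inter-modal and unimodal predictions through multiple loss functions in a weighted voting paradigm. Below is one hedged reading of that idea: per-branch classifiers whose logits are mixed by learnable vote weights and trained with a weighted sum of cross-entropy terms. The branch names, learnable-softmax voting, and the 0.5/1.0 loss weights are illustrative assumptions, not the authors' specification.

import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightedVotingHead(nn.Module):
    """Hypothetical decision module: weighted voting over branch logits."""

    def __init__(self, dim: int = 512, num_classes: int = 200):
        super().__init__()
        self.heads = nn.ModuleDict({
            "img": nn.Linear(dim, num_classes),    # unimodal image branch
            "txt": nn.Linear(dim, num_classes),    # unimodal text branch
            "fused": nn.Linear(dim, num_classes),  # cross-modal branch
        })
        # Learnable vote weights, normalized by softmax at forward time.
        self.vote = nn.Parameter(torch.zeros(3))

    def forward(self, feats: dict) -> dict:
        logits = {k: head(feats[k]) for k, head in self.heads.items()}
        w = torch.softmax(self.vote, dim=0)
        logits["vote"] = sum(wi * logits[k] for wi, k in zip(w, self.heads))
        return logits

    @staticmethod
    def loss(logits: dict, target: torch.Tensor) -> torch.Tensor:
        # Joint objective over unimodal, fused, and voted predictions;
        # the relative weights here are placeholders.
        return (0.5 * F.cross_entropy(logits["img"], target)
                + 0.5 * F.cross_entropy(logits["txt"], target)
                + F.cross_entropy(logits["fused"], target)
                + F.cross_entropy(logits["vote"], target))

Keeping per-branch losses alongside the voted prediction is one way to "jointly exploit inter-modal correlations and unimodal discriminative features": each unimodal head stays discriminative on its own while the vote learns how much to trust each branch.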
@article{qiao2025_2505.23365,
  title={MCFNet: A Multimodal Collaborative Fusion Network for Fine-Grained Semantic Classification},
  author={Yang Qiao and Xiaoyu Zhong and Xiaofeng Gu and Zhiguo Yu},
  journal={arXiv preprint arXiv:2505.23365},
  year={2025}
}