Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

10 May 2025
H. M. Dipu Kabir
Subrota Kumar Mondal
Mohammad Ali Moni
Abstract

This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and the associated clinical text. We also propose pre-training the initial layers on the investigated medical data before multimodal training. First, we apply a transferred initialization to the unimodal image portion of the dataset with batch augmentation; this step adapts the initial layer weights to medical data. Then, we apply neural networks (NNs) with the fine-tuned initial layers to batches of images, again with batch augmentation, to obtain image features. We also extract information from the image descriptions and combine it with the image features to train the head layer. We write a dataloader script that loads the multimodal data and applies existing unimodal image augmentation techniques, with batch augmentation, to the multimodal data. The dataloader draws a new random augmentation for each batch to improve generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods, and we achieve near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method, along with traditional counterparts, at the following repository: this http URL
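
The per-batch augmentation idea in the abstract can be illustrated with a short PyTorch dataloader sketch. This is a minimal reconstruction, not the authors' released script: the dataset class, the candidate transform list, and the collate function below are hypothetical stand-ins, assuming (image, text) multimodal samples and one freshly drawn image augmentation per batch.

import random
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms

# Candidate unimodal image augmentations. One is drawn afresh for every
# batch, following the abstract's "new random augmentation for each batch".
# These specific transforms are illustrative choices, not the paper's list.
AUGMENTATIONS = [
    transforms.RandomHorizontalFlip(p=1.0),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
]

class MultimodalDataset(Dataset):
    """Yields (image_tensor, text) pairs; a stand-in for FPU23 / UPMC Food-101 loading."""
    def __init__(self, images, texts):
        self.images, self.texts = images, texts
    def __len__(self):
        return len(self.images)
    def __getitem__(self, idx):
        return self.images[idx], self.texts[idx]

def batch_augment_collate(samples):
    # Draw one augmentation per batch and apply it to every image in the batch.
    aug = random.choice(AUGMENTATIONS)
    images = torch.stack([aug(img) for img, _ in samples])
    texts = [txt for _, txt in samples]
    return images, texts

if __name__ == "__main__":
    # Dummy data: 8 RGB image tensors (3x224x224) with toy text descriptions.
    images = [torch.rand(3, 224, 224) for _ in range(8)]
    texts = [f"description {i}" for i in range(8)]
    loader = DataLoader(MultimodalDataset(images, texts),
                        batch_size=4, collate_fn=batch_augment_collate)
    for imgs, txts in loader:
        print(imgs.shape, len(txts))  # torch.Size([4, 3, 224, 224]) and 4 texts

Here the augmentation type is re-drawn once per batch inside the collate function, so every sample in a batch shares the same transform family while successive batches differ; the authors' actual sampling scheme may differ in detail.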

@article{kabir2025_2505.06592,
  title={Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning},
  author={H M Dipu Kabir and Subrota Kumar Mondal and Mohammad Ali Moni},
  journal={arXiv preprint arXiv:2505.06592},
  year={2025}
}