
Multimodal Structured Generation: CVPR's 2nd MMFM Challenge Technical Report

Franz Louis Cesista
Abstract

Multimodal Foundation Models (MMFMs) have demonstrated strong performance in both computer vision and natural language processing tasks. However, their performance diminishes on tasks that require a high degree of integration between these modalities, such as document understanding. Moreover, finetuning and deploying these models require significantly more compute and engineering effort than unimodal models. In this work, we present Multimodal Structured Generation, a framework that forces (frozen) MMFMs to produce outputs in a strictly structured format by applying hard constraints directly to the output logits. This approach not only ensures that the model generates parseable outputs that downstream APIs can easily ingest, but also allows us to force the model to reason before answering, which significantly boosts performance without the need for expensive finetuning. We demonstrate the effectiveness of our method through competitive results in the CVPR 2nd MMFM Challenge, highlighting that carefully designed lightweight engineering can outperform expensive and complicated modeling approaches. All of our scripts, deployment steps, and evaluation results can be accessed at this https URL
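The core mechanism described above, constraining generation by applying hard constraints to the output logits, can be sketched as follows. This is a minimal, dependency-free illustration under our own assumptions (a toy vocabulary and a hand-written allowed-token set), not the paper's exact implementation:

```python
import math

# Sketch of constrained decoding via logit masking: at each decoding step,
# every token that would violate the target structure has its logit set to
# -inf, so greedy or sampled decoding can only emit structurally valid tokens.

def mask_logits(logits, allowed_ids):
    """Return logits with every token outside `allowed_ids` set to -inf."""
    allowed = set(allowed_ids)
    return [x if i in allowed else -math.inf for i, x in enumerate(logits)]

def greedy_pick(logits):
    """Index of the highest-scoring token."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy vocabulary: 0='{', 1='}', 2='"key"', 3=':', 4='"value"'
logits = [0.1, 2.0, 0.5, 0.3, 1.5]

# Suppose the schema requires a JSON object, so the first token must be '{'.
masked = mask_logits(logits, allowed_ids=[0])
print(greedy_pick(masked))  # 0: forced to emit '{' even though '}' scores higher
```

In practice the allowed-token set is recomputed at each step from the grammar or schema state, but the masking step itself is exactly this cheap, which is why the approach needs no finetuning.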

@article{cesista2025_2406.11403,
  title={Multimodal Structured Generation: CVPR's 2nd MMFM Challenge Technical Report},
  author={Franz Louis Cesista},
  journal={arXiv preprint arXiv:2406.11403},
  year={2025}
}