ResearchTrend.AI

Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning

17 February 2025
Yuqi Pang
Bowen Yang
Haoqin Tu
Yun Cao
Zeyu Zhang
    LRM
    MLLM
Abstract

Although Large Language Models (LLMs) excel at reasoning and generation for language tasks, they are not specifically designed for multimodal challenges. Training Multimodal Large Language Models (MLLMs), however, is resource-intensive and constrained by various training limitations. In this paper, we propose the Modular-based Visual Contrastive Decoding (MVCD) framework to overcome this obstacle. Our framework leverages LLMs' In-Context Learning (ICL) capability and the proposed visual contrastive-example decoding (CED), specifically tailored for this framework, without requiring any additional training. By converting visual signals into text and focusing on contrastive output distributions during decoding, we can highlight the new information introduced by contextual examples, explore their connections, and avoid over-reliance on prior encoded knowledge. MVCD enhances LLMs' visual perception, enabling them to see and reason over the input visuals. To demonstrate MVCD's effectiveness, we conduct experiments with four LLMs across five question answering datasets. Our results not only show consistent improvements in model accuracy but also explain the effective components inside our decoding strategy. Our code will be available at this https URL.
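The core decoding idea can be sketched generically. The following is a minimal illustration of contrastive decoding over two output distributions, one conditioned on the converted visual context plus in-context examples and one on the plain prompt; it is not the paper's exact CED implementation. The function names, the `alpha` contrast weight, and the `beta` plausibility gate are assumptions, modeled on standard contrastive decoding.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over a 1-D logit vector."""
    shifted = logits - np.max(logits)
    return shifted - np.log(np.sum(np.exp(shifted)))

def contrastive_decode_step(logits_with_examples, logits_without_examples,
                            alpha=1.0, beta=0.1):
    """Pick the next token by contrasting two next-token distributions.

    logits_with_examples:    logits given the prompt plus converted visual
                             context and in-context examples.
    logits_without_examples: logits given the plain prompt only.
    Tokens whose likelihood rises when the contextual examples are added
    are rewarded; `beta` masks out tokens that are implausible even under
    the full context, so the contrast cannot promote nonsense tokens.
    """
    log_p_ctx = log_softmax(logits_with_examples)
    log_p_plain = log_softmax(logits_without_examples)
    # Adaptive plausibility gate: keep tokens within a factor `beta`
    # of the most likely token under the full context.
    plausible = log_p_ctx >= np.log(beta) + np.max(log_p_ctx)
    scores = np.where(plausible, log_p_ctx - alpha * log_p_plain, -np.inf)
    return int(np.argmax(scores))
```

With `alpha=0` the step reduces to greedy decoding on the context-conditioned distribution; larger `alpha` increasingly favors tokens introduced by the contextual examples rather than the model's prior knowledge alone.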

@article{pang2025_2502.11751,
  title={Language Models Can See Better: Visual Contrastive Decoding For LLM Multimodal Reasoning},
  author={Yuqi Pang and Bowen Yang and Haoqin Tu and Yun Cao and Zeyu Zhang},
  journal={arXiv preprint arXiv:2502.11751},
  year={2025}
}