Visual Position Prompt for MLLM based Visual Grounding

19 March 2025
Wei Tang
Yanpeng Sun
Qinying Gu
Zechao Li
Abstract

Although Multimodal Large Language Models (MLLMs) excel at various image-related tasks, they encounter challenges in precisely aligning coordinates with spatial information within images, particularly in position-aware tasks such as visual grounding. This limitation arises from two key factors. First, MLLMs lack explicit spatial references, making it difficult to associate textual descriptions with precise image locations. Second, their feature extraction processes prioritize global context over fine-grained spatial details, leading to weak localization capability. To address these issues, we introduce VPP-LLaVA, an MLLM equipped with a Visual Position Prompt (VPP) to improve its grounding capability. VPP-LLaVA integrates two complementary mechanisms. The global VPP overlays learnable, axis-like embeddings onto the input image to provide structured spatial cues. The local VPP focuses on fine-grained localization by incorporating position-aware queries, which suggest probable object locations. We also introduce VPP-SFT, a dataset of 0.6M samples that consolidates high-quality visual grounding data into a compact format for efficient model training. Training on this dataset with VPP enhances the model's performance, achieving state-of-the-art results on standard grounding benchmarks despite using far fewer training samples than other MLLMs such as MiniGPT-v2, which relies on a much larger dataset (~21M samples). The code and the VPP-SFT dataset will be available at this https URL upon acceptance.
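The abstract describes the two mechanisms only at a high level. Below is a minimal, hypothetical PyTorch sketch of one way they could be wired up: a global VPP that adds learnable, axis-like embeddings to patch features, and a local VPP whose position-aware queries attend to the image features. The module names, the patch-grid size, the number of queries, and the use of cross-attention are assumptions for illustration, not the authors' implementation.

# Hedged sketch, not the authors' code: one plausible reading of the abstract's
# two mechanisms. All module and parameter names here are assumptions.
import torch
import torch.nn as nn


class GlobalVPP(nn.Module):
    """Overlay learnable, axis-like embeddings onto patch features (assumed design)."""

    def __init__(self, dim: int, grid: int = 24):
        super().__init__()
        # One learnable embedding per row and per column of the patch grid.
        self.row_embed = nn.Parameter(torch.zeros(grid, dim))
        self.col_embed = nn.Parameter(torch.zeros(grid, dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, grid*grid, dim) patch features from the vision encoder.
        b, n, d = feats.shape
        g = int(n ** 0.5)
        # Broadcast row + column embeddings into a (g, g, d) "axis" grid.
        axis = self.row_embed[:, None, :] + self.col_embed[None, :, :]
        return feats + axis.reshape(1, g * g, d)


class LocalVPP(nn.Module):
    """Position-aware queries that pool image features to suggest likely object locations."""

    def __init__(self, dim: int, num_queries: int = 32, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Expand the learnable queries to the batch and cross-attend to patch features.
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        out, _ = self.attn(q, feats, feats)
        return out  # (B, num_queries, dim), fed to the LLM alongside patch tokens


if __name__ == "__main__":
    feats = torch.randn(2, 24 * 24, 1024)   # dummy ViT patch features
    feats = GlobalVPP(1024)(feats)          # add axis-like spatial cues
    local_tokens = LocalVPP(1024)(feats)    # position-aware query tokens
    print(feats.shape, local_tokens.shape)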

@article{tang2025_2503.15426,
  title={Visual Position Prompt for MLLM based Visual Grounding},
  author={Wei Tang and Yanpeng Sun and Qinying Gu and Zechao Li},
  journal={arXiv preprint arXiv:2503.15426},
  year={2025}
}