Show or Tell? Effectively prompting Vision-Language Models for semantic segmentation

25 March 2025
Niccolo Avogaro, Thomas Frick, Mattia Rigotti, Andrea Bartezzaghi, Filip Janicki, Cristiano Malossi, Konrad Schindler, Roy Assaf
Topics: MLLM, VLM
Abstract

Large Vision-Language Models (VLMs) are increasingly regarded as foundation models that can be instructed to solve diverse tasks by prompting, without task-specific training. We examine the seemingly obvious question of how to effectively prompt VLMs for semantic segmentation. To that end, we systematically evaluate the segmentation performance of several recent models guided by either text or visual prompts on the out-of-distribution MESS dataset collection. We introduce a scalable prompting scheme, few-shot prompted semantic segmentation, inspired by open-vocabulary segmentation and few-shot learning. We find that VLMs lag far behind specialist models trained for a specific segmentation task, by about 30% Intersection-over-Union on average. Moreover, text prompts and visual prompts are complementary: each mode fails on many examples that the other can solve. Our analysis suggests that being able to anticipate the most effective prompt modality could yield an 11% improvement in performance. Motivated by these findings, we propose PromptMatcher, a remarkably simple, training-free baseline that combines text and visual prompts, achieving state-of-the-art results: it outperforms the best text-prompted VLM by 2.5% and the top visual-prompted VLM by 3.5% on few-shot prompted semantic segmentation.
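The abstract's headline numbers are mean Intersection-over-Union (IoU) gaps, plus an oracle-style upper bound obtained by picking the better prompt modality per example. The sketch below is a minimal illustration of how such quantities could be computed; the iou, mean_iou, and oracle_select helpers and the per-image selection rule are assumptions for illustration, not the paper's PromptMatcher method.

import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray, cls: int) -> float:
    """Intersection-over-Union for one class between two integer label maps."""
    pred_mask = pred == cls
    gt_mask = gt == cls
    union = np.logical_or(pred_mask, gt_mask).sum()
    if union == 0:
        return float("nan")  # class absent in both prediction and ground truth
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return float(inter) / float(union)

def mean_iou(pred: np.ndarray, gt: np.ndarray, classes) -> float:
    """Mean IoU over the classes that appear in prediction or ground truth."""
    scores = [iou(pred, gt, c) for c in classes]
    scores = [s for s in scores if not np.isnan(s)]
    return float(np.mean(scores)) if scores else float("nan")

def oracle_select(text_preds, visual_preds, gts, classes) -> float:
    """Upper bound from choosing, per image, whichever prompt modality
    (text or visual) achieves the higher mean IoU -- an illustration of the
    'anticipate the most effective prompt modality' analysis, with placeholder
    predictions rather than the paper's models."""
    total = 0.0
    for t, v, g in zip(text_preds, visual_preds, gts):
        total += max(mean_iou(t, g, classes), mean_iou(v, g, classes))
    return total / len(gts)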

View on arXiv: https://arxiv.org/abs/2503.19647
@article{avogaro2025_2503.19647,
  title={Show or Tell? Effectively prompting Vision-Language Models for semantic segmentation},
  author={Niccolo Avogaro and Thomas Frick and Mattia Rigotti and Andrea Bartezzaghi and Filip Janicki and Cristiano Malossi and Konrad Schindler and Roy Assaf},
  journal={arXiv preprint arXiv:2503.19647},
  year={2025}
}