ResearchTrend.AI
Analyzing CLIP's Performance Limitations in Multi-Object Scenarios: A Controlled High-Resolution Study

27 February 2025
Reza Abbasi
Ali Nazari
Aminreza Sefid
Mohammadali Banayeeanzade
Mohammad Hossein Rohban
Mahdieh Soleymani Baghshah
Abstract

Contrastive Language-Image Pre-training (CLIP) models have demonstrated remarkable performance in zero-shot classification tasks, yet they struggle to handle complex multi-object scenarios. This study presents a comprehensive analysis of CLIP's performance limitations in multi-object contexts through controlled experiments. We introduce two custom datasets, SimCO and CompCO, to evaluate CLIP's image and text encoders in various multi-object configurations. Our findings reveal significant biases in both encoders: the image encoder favors larger objects, while the text encoder prioritizes objects mentioned first in descriptions. We hypothesize these biases originate from CLIP's training process and provide evidence through analyses of the COCO dataset and CLIP's training progression. Additionally, we extend our investigation to Stable Diffusion models, revealing that biases in the CLIP text encoder significantly impact text-to-image generation tasks. Our experiments demonstrate how these biases affect CLIP's performance in image-caption matching and generation tasks, particularly when manipulating object sizes and their order in captions. This work contributes valuable insights into CLIP's behavior in complex visual environments and highlights areas for improvement in future vision-language models.
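The word-order probe the abstract describes can be approximated as follows: take a two-object image, score it against a caption and against the same caption with the object order swapped, and inspect the similarity gap. This is a hypothetical sketch using the open-source Hugging Face `transformers` CLIP checkpoint `openai/clip-vit-base-patch32`, not the authors' code; `dog_and_cat.png` is a placeholder image path.

```python
# Sketch of a first-mention-bias probe for CLIP's text encoder
# (assumptions: transformers CLIP API, a placeholder two-object image).
import math


def cosine_sim(a, b):
    """Cosine similarity between two plain-Python vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def swap_first_two_objects(caption: str) -> str:
    """Swap the two objects in an 'X and Y' style caption."""
    head, _, tail = caption.partition(" and ")
    return f"{tail} and {head}"


if __name__ == "__main__":
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("dog_and_cat.png")  # placeholder two-object image
    caption = "a dog and a cat"
    captions = [caption, swap_first_two_objects(caption)]

    inputs = proc(text=captions, images=image,
                  return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)

    img = out.image_embeds[0].tolist()
    for cap, txt in zip(captions, out.text_embeds):
        # A first-mention bias would show up as a systematic gap
        # between the two scores across many such image/caption pairs.
        print(f"{cap!r}: {cosine_sim(img, txt.tolist()):.4f}")
```

Running this over many object pairs (rather than a single image) is what would let one estimate the order bias; a single pair only illustrates the measurement.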

@article{abbasi2025_2502.19828,
  title={Analyzing CLIP's Performance Limitations in Multi-Object Scenarios: A Controlled High-Resolution Study},
  author={Reza Abbasi and Ali Nazari and Aminreza Sefid and Mohammadali Banayeeanzade and Mohammad Hossein Rohban and Mahdieh Soleymani Baghshah},
  journal={arXiv preprint arXiv:2502.19828},
  year={2025}
}