

Embedding Shift Dissection on CLIP: Effects of Augmentations on VLM's Representation Learning

30 March 2025
Ashim Dahal
Saydul Akbar Murad
Nick Rahimi
Abstract

Understanding the representation shift in Vision Language Models (VLMs) such as CLIP under different augmentations provides valuable insight into mechanistic interpretability. In this study, we measure the shift in CLIP's embeddings under 9 common augmentation techniques: noise, blur, color jitter, scale and rotate, flip, elastic and perspective transforms, random brightness and contrast, and coarse dropout of pixel blocks. We scrutinize the embedding shifts in terms of attention-map similarity, patch, edge, and detail preservation, cosine similarity, L2 distance, pairwise distance, and dendrogram clusters, and provide qualitative analysis on sample images. Our findings suggest that certain augmentations, such as noise, perspective transform, and shift scaling, have a more drastic impact on the embedding shift than others. This study provides a concrete foundation for future work on VLM robustness for mechanistic interpretability and adversarial data defense. The code implementation for this study can be found at this https URL.
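Two of the shift metrics named in the abstract, cosine similarity and L2 distance between the clean and augmented embeddings, can be sketched as follows. This is a minimal illustration, not the authors' code: the CLIP image embeddings are stood in for by random vectors, and the "noise" augmentation is simulated by additive Gaussian perturbation of the embedding itself rather than of the input image.

```python
import numpy as np

def embedding_shift(orig: np.ndarray, aug: np.ndarray) -> dict:
    """Compare an original embedding against its augmented counterpart."""
    cos = float(np.dot(orig, aug) / (np.linalg.norm(orig) * np.linalg.norm(aug)))
    l2 = float(np.linalg.norm(orig - aug))
    return {"cosine_similarity": cos, "l2_distance": l2}

# Stand-ins for 512-d CLIP image embeddings; in real usage one would
# encode the clean image and its augmented version with CLIP's image
# encoder and compare those outputs instead.
rng = np.random.default_rng(0)
clean = rng.normal(size=512)
noisy = clean + rng.normal(scale=0.1, size=512)  # simulated perturbation

shift = embedding_shift(clean, noisy)
```

A small perturbation leaves the cosine similarity close to 1 while producing a nonzero L2 distance; stronger augmentations (in the paper, e.g. noise or perspective transform) would push the cosine similarity further from 1.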

@article{dahal2025_2503.23495,
  title={Embedding Shift Dissection on CLIP: Effects of Augmentations on VLM's Representation Learning},
  author={Ashim Dahal and Saydul Akbar Murad and Nick Rahimi},
  journal={arXiv preprint arXiv:2503.23495},
  year={2025}
}