ResearchTrend.AI
Demystifying CLIP Data

31 December 2024
Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao (Bernie) Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer
Tags: VLM, CLIP

Papers citing "Demystifying CLIP Data"

37 / 87 papers shown
Few-Shot Recognition via Stage-Wise Retrieval-Augmented Finetuning (17 Jun 2024)
Tian Liu, Huixin Zhang, Shubham Parashar, Shu Kong

What If We Recaption Billions of Web Images with LLaMA-3? [VLM, MLLM] (12 Jun 2024)
Xianhang Li, Haoqin Tu, Mude Hui, Zeyu Wang, Bingchen Zhao, ..., Jieru Mei, Qing Liu, Huangjie Zheng, Yuyin Zhou, Cihang Xie

Labeled Data Selection for Category Discovery (07 Jun 2024)
Bingchen Zhao, Nico Lang, Serge J. Belongie, Oisin Mac Aodha

Generalization Beyond Data Imbalance: A Controlled Study on CLIP for Transferable Insights (31 May 2024)
Xin Wen, Bingchen Zhao, Yilun Chen, Jiangmiao Pang, Xiaojuan Qi

Don't drop your samples! Coherence-aware training benefits Conditional diffusion [DiffM] (30 May 2024)
Nicolas Dufour, Victor Besnier, Vicky Kalogeiton, David Picard

WIDIn: Wording Image for Domain-Invariant Representation in Single-Source Domain Generalization [VLM] (28 May 2024)
Jiawei Ma, Yulei Niu, Shiyuan Huang, G. Han, Shih-Fu Chang

ConvLLaVA: Hierarchical Backbones as Visual Encoder for Large Multimodal Models [MLLM, VLM] (24 May 2024)
Chunjiang Ge, Sijie Cheng, Ziming Wang, Jiale Yuan, Yuan Gao, Jun Song, Shiji Song, Gao Huang, Bo Zheng

Automatic Data Curation for Self-Supervised Learning: A Clustering-Based Approach [SSL] (24 May 2024)
Huy V. Vo, Vasil Khalidov, Timothée Darcet, Théo Moutakanni, Nikita Smetanin, ..., Maxime Oquab, Armand Joulin, Hervé Jégou, Patrick Labatut, Piotr Bojanowski

No Filter: Cultural and Socioeconomic Diversity in Contrastive Vision-Language Models [VLM] (22 May 2024)
Angeline Pouget, Lucas Beyer, Emanuele Bugliarello, Xiao Wang, Andreas Steiner, Xiao-Qi Zhai, Ibrahim M. Alabdulmohsin

SciFIBench: Benchmarking Large Multimodal Models for Scientific Figure Interpretation (14 May 2024)
Jonathan Roberts, Kai Han, N. Houlsby, Samuel Albanie

Understanding Retrieval-Augmented Task Adaptation for Vision-Language Models [VLM] (02 May 2024)
Yifei Ming, Yixuan Li

Modeling Caption Diversity in Contrastive Vision-Language Pretraining [CLIP, VLM] (30 Apr 2024)
Samuel Lavoie, Polina Kirichenko, Mark Ibrahim, Mahmoud Assran, Andrew Gordon Wilson, Aaron Courville, Nicolas Ballas

OpenStreetView-5M: The Many Roads to Global Visual Geolocation (29 Apr 2024)
Guillaume Astruc, Nicolas Dufour, Ioannis Siglidis, Constantin Aronssohn, Nacim Bouia, ..., Charles Raude, Elliot Vincent, Lintao Xu, Hongyu Zhou, Loic Landrieu

HYPE: Hyperbolic Entailment Filtering for Underspecified Images and Texts (26 Apr 2024)
Wonjae Kim, Sanghyuk Chun, Taekyung Kim, Dongyoon Han, Sangdoo Yun

Omniview-Tuning: Boosting Viewpoint Invariance of Vision-Language Pre-training Models [VLM] (18 Apr 2024)
Shouwei Ruan, Yinpeng Dong, Hanqing Liu, Yao Huang, Hang Su, Xingxing Wei

Two Effects, One Trigger: On the Modality Gap, Object Bias, and Information Imbalance in Contrastive Vision-Language Models [VLM] (11 Apr 2024)
Simon Schrodi, David T. Hoffmann, Max Argus, Volker Fischer, Thomas Brox

DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets (03 Apr 2024)
Harsh Rangwani, Pradipto Mondal, Mayank Mishra, Ashish Ramayee Asokan, R. V. Babu

Garment3DGen: 3D Garment Stylization and Texture Generation [3DH] (27 Mar 2024)
N. Sarafianos, Tuur Stuyck, Xiaoyu Xiang, Yilei Li, Jovan Popovic, Rakesh Ranjan

DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning [DiffM] (21 Mar 2024)
Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo

Towards Multimodal In-Context Learning for Vision & Language Models [VLM] (19 Mar 2024)
Sivan Doveh, Shaked Perek, M. Jehanzeb Mirza, Wei Lin, Amit Alfassy, Assaf Arbelle, S. Ullman, Leonid Karlinsky

Meta-Prompting for Automating Zero-shot Visual Recognition with LLMs [VLM, MLLM] (18 Mar 2024)
M. Jehanzeb Mirza, Leonid Karlinsky, Wei Lin, Sivan Doveh, Jakub Micorek, Mateusz Koziński, Hilde Kuehne, Horst Possegger

Improving Medical Multi-modal Contrastive Learning with Expert Annotations [MedIm, VLM] (15 Mar 2024)
Yogesh Kumar, Pekka Marttinen

Language-Driven Visual Consensus for Zero-Shot Semantic Segmentation [VLM] (13 Mar 2024)
Zicheng Zhang, Tong Zhang, Yi Zhu, Jian-zhuo Liu, Xiaodan Liang, QiXiang Ye, Wei Ke

FocusCLIP: Multimodal Subject-Level Guidance for Zero-Shot Transfer in Human-Centric Tasks [VLM, CLIP] (11 Mar 2024)
Muhammad Gul Zain Ali Khan, Muhammad Ferjad Naeem, F. Tombari, Luc Van Gool, Didier Stricker, Muhammad Zeshan Afzal

Finetuned Multimodal Language Models Are High-Quality Image-Text Data Filters (05 Mar 2024)
Weizhi Wang, Khalil Mrini, Linjie Yang, Sateesh Kumar, Yu Tian, Xifeng Yan, Heng Wang

Exploring the Frontier of Vision-Language Models: A Survey of Current Methodologies and Future Directions [VLM] (20 Feb 2024)
Akash Ghosh, Arkadeep Acharya, Sriparna Saha, Vinija Jain, Aman Chadha

Mind the Modality Gap: Towards a Remote Sensing Vision-Language Model via Cross-modal Alignment [VLM] (15 Feb 2024)
Angelos Zavras, Dimitrios Michail, Begum Demir, Ioannis Papoutsis

An Empirical Study Into What Matters for Calibrating Vision-Language Models [VLM] (12 Feb 2024)
Weijie Tu, Weijian Deng, Dylan Campbell, Stephen Gould, Tom Gedeon

Careful with that Scalpel: Improving Gradient Surgery with an EMA [MedIm] (05 Feb 2024)
Yu-Guan Hsieh, James Thornton, Eugène Ndiaye, Michal Klein, Marco Cuturi, Pierre Ablin

LHRS-Bot: Empowering Remote Sensing with VGI-Enhanced Large Multimodal Language Model (04 Feb 2024)
Dilxat Muhtar, Zhenshi Li, Feng-Xue Gu, Xue-liang Zhang, P. Xiao

On Catastrophic Inheritance of Large Foundation Models [AI4CE] (02 Feb 2024)
Hao Chen, Bhiksha Raj, Xing Xie, Jindong Wang

SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training? [CLIP, VLM] (02 Feb 2024)
Hasan Hammoud, Hani Itani, Fabio Pizzati, Philip H. S. Torr, Adel Bibi, Bernard Ghanem

Computer Vision for Primate Behavior Analysis in the Wild (29 Jan 2024)
Richard Vogg, Timo Lüddecke, Jonathan Henrich, Sharmita Dey, Matthias Nuske, ..., Alexander Gail, Stefan Treue, H. Scherberger, F. Worgotter, Alexander S. Ecker

Low-Resource Vision Challenges for Foundation Models [VLM] (09 Jan 2024)
Yunhua Zhang, Hazel Doughty, Cees G. M. Snoek

Active Open-Vocabulary Recognition: Let Intelligent Moving Mitigate CLIP Limitations [VLM] (28 Nov 2023)
Lei Fan, Jianxiong Zhou, Xiaoying Xing, Ying Wu

VeCLIP: Improving CLIP Training via Visual-enriched Captions [CLIP, VLM] (11 Oct 2023)
Zhengfeng Lai, Haotian Zhang, Bowen Zhang, Wentao Wu, Haoping Bai, ..., Zhe Gan, Jiulong Shan, Chen-Nee Chuah, Yinfei Yang, Meng Cao

Data Filtering Networks [CLIP] (29 Sep 2023)
Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander Toshev, Vaishaal Shankar