arXiv:2305.05095
Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness
8 May 2023
Liangliang Cao, Bowen Zhang, Chen Chen, Yinfei Yang, Xianzhi Du, Wen-Cheng Zhang, Zhiyun Lu, Yantao Zheng
Tags: CLIP, VLM
Papers citing "Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness" (5 papers):

- Efficient Self-Supervised Learning for Earth Observation via Dynamic Dataset Curation. Thomas Kerdreux, A. Tuel, Quentin Febvre, A. Mouche, Bertrand Chapron. 09 Apr 2025.
- PreSTU: Pre-Training for Scene-Text Understanding. Jihyung Kil, Soravit Changpinyo, Xi Chen, Hexiang Hu, Sebastian Goodman, Wei-Lun Chao, Radu Soricut. 12 Sep 2022. (VLM)
- TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models. Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, D. Florêncio, Cha Zhang, Zhoujun Li, Furu Wei. 21 Sep 2021. (ViT)
- Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut. 17 Feb 2021. (VLM)
- Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision. Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig. 11 Feb 2021. (VLM, CLIP)