arXiv: 2309.05551
OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data
11 September 2023
Giuseppe Cartella, Alberto Baldrati, Davide Morelli, Marcella Cornia, Marco Bertini, Rita Cucchiara
Tags: VLM, CLIP
Papers citing "OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data" (6 of 6 papers shown)
Seeing the Abstract: Translating the Abstract Language for Vision Language Models
Davide Talon, Federico Girella, Ziyue Liu, Marco Cristani, Yiming Wang
VLM | 52 | 0 | 0 | 06 May 2025
Learning-To-Rank Approach for Identifying Everyday Objects Using a Physical-World Search Engine
Kanta Kaneda, Shunya Nagashima, Ryosuke Korekata, Motonari Kambara, Komei Sugiura
35 | 6 | 0 | 26 Dec 2023
LaDI-VTON: Latent Diffusion Textual-Inversion Enhanced Virtual Try-On
Davide Morelli, Alberto Baldrati, Giuseppe Cartella, Marcella Cornia, Marco Bertini, Rita Cucchiara
DiffM | 55 | 100 | 0 | 22 May 2023
High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions
Sangyun Lee, Gyojung Gu, S. Park, Seunghwan Choi, Jaegul Choo
DiffM | 66 | 128 | 0 | 28 Jun 2022
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
MLLM, BDL, VLM, CLIP | 390 | 4,125 | 0 | 28 Jan 2022
Making Pre-trained Language Models Better Few-shot Learners
Tianyu Gao, Adam Fisch, Danqi Chen
241 | 1,918 | 0 | 31 Dec 2020