Towards Vision-Language-Garment Models for Web Knowledge Garment Understanding and Generation
Main: 3 pages · 6 figures · 1 table · Bibliography: 1 page
Abstract
Multimodal foundation models have demonstrated strong generalization, yet their ability to transfer knowledge to specialized domains such as garment generation remains underexplored. We introduce VLG, a vision-language-garment model that synthesizes garments from textual descriptions and visual imagery. Our experiments assess VLG's zero-shot generalization, investigating its ability to transfer web-scale reasoning to unseen garment styles and prompts. Preliminary results indicate promising transfer capabilities, highlighting the potential for multimodal foundation models to adapt effectively to specialized domains like fashion design.
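The abstract gives no implementation details, so the following is only a hypothetical sketch of what a text-and-image conditioned garment generator could look like. All names here (ToyVLGModel, GarmentSpec, the embedding dimensions, and the panel-parameter output) are illustrative assumptions, not the authors' architecture or API.

```python
# Hypothetical sketch of a vision-language-garment style interface.
# Nothing here comes from the paper; it only illustrates conditioning
# garment generation on text and (optionally) image embeddings.

from dataclasses import dataclass
from typing import Optional

import torch
import torch.nn as nn


@dataclass
class GarmentSpec:
    """Toy stand-in for a generated garment representation (e.g. panel parameters)."""
    panel_params: torch.Tensor  # shape: (batch, num_panels, param_dim)


class ToyVLGModel(nn.Module):
    """Minimal text+image conditioned generator, for illustration only."""

    def __init__(self, text_dim: int = 64, image_dim: int = 64,
                 num_panels: int = 8, param_dim: int = 16) -> None:
        super().__init__()
        self.image_dim = image_dim
        self.num_panels = num_panels
        self.param_dim = param_dim
        self.fuse = nn.Linear(text_dim + image_dim, 128)
        self.head = nn.Linear(128, num_panels * param_dim)

    def forward(self, text_emb: torch.Tensor,
                image_emb: Optional[torch.Tensor] = None) -> GarmentSpec:
        # Text-only (zero-shot prompt) inputs use a zero image embedding.
        if image_emb is None:
            image_emb = torch.zeros(text_emb.shape[0], self.image_dim)
        fused = torch.relu(self.fuse(torch.cat([text_emb, image_emb], dim=-1)))
        params = self.head(fused).view(-1, self.num_panels, self.param_dim)
        return GarmentSpec(panel_params=params)


if __name__ == "__main__":
    model = ToyVLGModel()
    text_emb = torch.randn(1, 64)   # placeholder for a frozen VLM text embedding
    image_emb = torch.randn(1, 64)  # placeholder for a frozen VLM image embedding
    spec = model(text_emb, image_emb)
    print(spec.panel_params.shape)  # torch.Size([1, 8, 16])
```

In a real system the random placeholder embeddings would be replaced by features from a pretrained vision-language backbone, which is what would carry the web-scale knowledge the abstract refers to.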
