Beginning with You: Perceptual-Initialization Improves Vision-Language Representation and Alignment

Abstract

We introduce Perceptual-Initialization (PI), a paradigm shift in visual representation learning that incorporates human perceptual structure during the initialization phase rather than as a downstream fine-tuning step. By using human-derived triplet embeddings from the NIGHTS dataset to initialize a CLIP vision encoder, followed by self-supervised learning on YFCC15M, our approach achieves significant zero-shot improvements, without any task-specific fine-tuning, across 29 classification and 2 retrieval benchmarks. On ImageNet-1K, zero-shot gains emerge after approximately 15 epochs of pretraining. Benefits are observed across datasets of varying scale, with improvements manifesting at different stages of pretraining depending on dataset characteristics. Our approach consistently improves zero-shot top-1 accuracy, top-5 accuracy, and retrieval recall (e.g., R@1, R@5) across these diverse evaluation tasks, without requiring any adaptation to the target domains. These findings challenge the conventional wisdom of reserving human perceptual data for fine-tuning and demonstrate that embedding human perceptual structure early in representation learning yields more capable, better-aligned vision-language systems that generalize immediately to unseen tasks. Our work shows that "beginning with you," that is, starting from human perception, provides a stronger foundation for general-purpose vision-language intelligence.
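
To make the two-stage recipe concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: the toy encoder, margin, batch sizes, and dummy tensors are illustrative assumptions. In practice, phase 1 would train a CLIP vision tower on NIGHTS triplets and phase 2 would continue with contrastive image-text pretraining on YFCC15M from those weights.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVisionEncoder(nn.Module):
    """Stand-in for the CLIP vision tower (hypothetical architecture)."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
    def forward(self, x):
        # Unit-normalize so dot products are cosine similarities.
        return F.normalize(self.backbone(x), dim=-1)

def triplet_loss(encoder, anchor, positive, negative, margin=0.2):
    """Phase 1: perceptual initialization. NIGHTS-style triplets encode
    which of two images humans judged more similar to the anchor."""
    za, zp, zn = encoder(anchor), encoder(positive), encoder(negative)
    d_pos = 1.0 - (za * zp).sum(-1)  # cosine distance to the human-chosen image
    d_neg = 1.0 - (za * zn).sum(-1)
    return F.relu(d_pos - d_neg + margin).mean()

def clip_loss(img_emb, txt_emb, temperature=0.07):
    """Phase 2: standard symmetric CLIP contrastive loss on
    image-text pairs (e.g., YFCC15M)."""
    logits = img_emb @ txt_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

encoder = TinyVisionEncoder()
opt = torch.optim.AdamW(encoder.parameters(), lr=1e-4)

# Phase 1 on dummy triplets (real data: NIGHTS human judgments).
a, p, n = (torch.randn(8, 3, 224, 224) for _ in range(3))
loss = triplet_loss(encoder, a, p, n)
loss.backward(); opt.step(); opt.zero_grad()

# Phase 2: the same weights now initialize CLIP-style pretraining;
# txt stands in for a text tower's normalized embeddings.
img = encoder(torch.randn(8, 3, 224, 224))
txt = F.normalize(torch.randn(8, 512), dim=-1)
loss = clip_loss(img, txt)
loss.backward(); opt.step(); opt.zero_grad()

The key design point is ordering: the perceptual signal shapes the weights before large-scale self-supervision begins, rather than being applied as a corrective fine-tuning stage afterward.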

@article{hu2025_2505.14204,
  title={Beginning with You: Perceptual-Initialization Improves Vision-Language Representation and Alignment},
  author={Yang Hu and Runchen Wang and Stephen Chong Zhao and Xuhui Zhan and Do Hun Kim and Mark Wallace and David A. Tovar},
  journal={arXiv preprint arXiv:2505.14204},
  year={2025}
}