Improving Unsupervised Task-driven Models of Ventral Visual Stream via Relative Position Predictivity

Based on the concept that the ventral visual stream (VVS) mainly functions for object recognition, current unsupervised task-driven methods model the VVS with contrastive learning and have achieved good brain similarity. However, we believe the functions of the VVS extend beyond object recognition. In this paper, we introduce an additional function of the VVS, named relative position (RP) prediction. We first theoretically explain why contrastive learning may be unable to yield the model capability of RP prediction. Motivated by this, we then integrate RP learning with contrastive learning and propose a new unsupervised task-driven method to model the VVS that is more in line with biological reality. We conduct extensive experiments demonstrating that: (i) our method significantly improves downstream performance on object recognition while enhancing RP predictivity; (ii) RP predictivity generally improves the brain similarity of models. Our results provide strong evidence for the involvement of the VVS in location perception (especially RP prediction) from a computational perspective.
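The abstract does not give the paper's exact objective, but the described integration of RP learning with contrastive learning can be sketched as a weighted sum of two standard terms: an InfoNCE-style contrastive loss over embeddings and a cross-entropy loss over relative-position classes. The function names, the choice of InfoNCE, and the weighting parameter `lam` below are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def info_nce_loss(anchor, candidates, temperature=0.1):
    """Contrastive (InfoNCE-style) loss: `candidates[0]` is the positive
    view of `anchor`; the remaining rows are negatives. Assumed form."""
    sims = candidates @ anchor / temperature
    sims = sims - sims.max()                      # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])

def rp_cross_entropy(rp_logits, rp_target):
    """Cross-entropy over discrete relative-position classes (e.g. the
    displacement of one image patch relative to another). Assumed form."""
    logits = rp_logits - rp_logits.max()          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[rp_target])

def combined_loss(anchor, candidates, rp_logits, rp_target, lam=1.0):
    """Hypothetical joint objective: contrastive term plus a weighted
    RP-prediction term, with `lam` balancing the two."""
    return info_nce_loss(anchor, candidates) + lam * rp_cross_entropy(rp_logits, rp_target)
```

In this sketch the RP head would receive a pair of patch embeddings and score the possible relative displacements; minimizing the combined loss trains the encoder for both invariance (contrastive term) and location sensitivity (RP term).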
@article{rong2025_2505.08316,
  title={Improving Unsupervised Task-driven Models of Ventral Visual Stream via Relative Position Predictivity},
  author={Dazhong Rong and Hao Dong and Xing Gao and Jiyu Wei and Di Hong and Yaoyao Hao and Qinming He and Yueming Wang},
  journal={arXiv preprint arXiv:2505.08316},
  year={2025}
}