Learning Visual-Semantic Subspace Representations

Abstract

Learning image representations that capture rich semantic relationships remains a significant challenge. Existing approaches either rely on contrastive objectives that lack robust theoretical guarantees or struggle to represent the partial orders inherent to structured visual-semantic data. In this paper, we introduce a nuclear norm-based loss function, grounded in the same information-theoretic principles that have proved effective in self-supervised learning. We present a theoretical characterization of this loss, demonstrating that, in addition to promoting class orthogonality, it encodes the spectral geometry of the data within a subspace lattice. This geometric representation allows us to associate logical propositions with subspaces, ensuring that our learned representations adhere to a predefined symbolic structure.
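
The exact composition of the loss is given in the paper; as a rough illustration only, the sketch below shows how a nuclear norm term over a batch of embeddings could be computed in PyTorch. The function names and the per-class-versus-batch combination here are assumptions made for this sketch, not the paper's definition.

import torch

def nuclear_norm(features: torch.Tensor) -> torch.Tensor:
    # Nuclear norm = sum of singular values of the (batch x dim) feature matrix.
    return torch.linalg.svdvals(features).sum()

def subspace_loss(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Hypothetical objective: keep each class's features in a low-dimensional
    # subspace (small per-class nuclear norm) while spreading the batch as a
    # whole (large total nuclear norm).
    total = nuclear_norm(features)
    per_class = sum(nuclear_norm(features[labels == c]) for c in labels.unique())
    return per_class - total

Since the nuclear norm is a convex surrogate for matrix rank, penalizing it per class pushes each class's features toward a low-dimensional subspace, in line with the class orthogonality and subspace-lattice structure described above.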

@article{moreira2025_2405.16213,
  title={Learning Visual-Semantic Subspace Representations},
  author={Gabriel Moreira and Manuel Marques and João Paulo Costeira and Alexander Hauptmann},
  journal={arXiv preprint arXiv:2405.16213},
  year={2025}
}