Fine Grain Classification: Connecting Meta using Cross-Contrastive pre-training

Abstract

Fine-grained visual classification aims to recognize objects belonging to multiple subordinate categories within a super-category. However, this remains a challenging problem, as appearance information alone is often insufficient to accurately differentiate between fine-grained visual categories. To address this, we propose a novel and unified framework that leverages meta-information to assist fine-grained identification. We tackle the joint learning of visual and meta-information through cross-contrastive pre-training. In the first stage, we employ three encoders for images, text, and meta-information, aligning their projected embeddings to achieve better representations. We then fine-tune the image and meta-information encoders for the classification task. Experiments on the NABirds dataset demonstrate that our framework effectively utilizes meta-information to enhance fine-grained recognition performance. With the addition of meta-information, our framework surpasses the current baseline on NABirds by 7.83%. Furthermore, it achieves an accuracy of 84.44% on the NABirds dataset, outperforming many existing state-of-the-art approaches that utilize meta-information.
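The cross-contrastive pre-training described above can be sketched as a sum of pairwise alignment losses over the projected embeddings of the three encoders. The sketch below is a minimal, self-contained illustration in NumPy, not the paper's implementation: the InfoNCE-style loss, the temperature value, and the choice to simply sum the three pairwise terms are our assumptions about how the image, text, and meta-information embeddings are jointly aligned.

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Contrastive (InfoNCE-style) loss treating row i of `a` and row i of `b`
    as a positive pair and all other rows as negatives. Temperature is an
    assumed hyperparameter, not taken from the paper."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)   # L2-normalize embeddings
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature                   # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))              # matched pairs sit on the diagonal

def cross_contrastive_loss(img_emb, txt_emb, meta_emb):
    """Assumed combination: align every pair of modalities and sum the losses."""
    return (info_nce(img_emb, txt_emb)
            + info_nce(img_emb, meta_emb)
            + info_nce(txt_emb, meta_emb))

# Toy check: well-aligned projections should score lower than random ones.
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 32))
aligned = cross_contrastive_loss(img,
                                 img + 0.01 * rng.normal(size=(8, 32)),
                                 img + 0.01 * rng.normal(size=(8, 32)))
random_ = cross_contrastive_loss(img,
                                 rng.normal(size=(8, 32)),
                                 rng.normal(size=(8, 32)))
```

After this pre-training stage, the image and meta-information encoders would be fine-tuned with a standard classification head, as the abstract describes.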

@article{mamtani2025_2504.20322,
  title={Fine Grain Classification: Connecting Meta using Cross-Contrastive pre-training},
  author={Sumit Mamtani and Yash Thesia},
  journal={arXiv preprint arXiv:2504.20322},
  year={2025}
}