Hypernym Bias: Unraveling Deep Classifier Training Dynamics through the Lens of Class Hierarchy

17 February 2025
Roman Malashin
Valeria Yachnaya
Alexander Mullin
arXiv · PDF · HTML
Abstract

We investigate the training dynamics of deep classifiers by examining how hierarchical relationships between classes evolve during training. Through extensive experiments, we argue that the learning process in classification problems can be understood through the lens of label clustering. Specifically, we observe that networks tend to distinguish higher-level (hypernym) categories in the early stages of training, and learn more specific (hyponym) categories later. We introduce a novel framework to track the evolution of the feature manifold during training, revealing how the hierarchy of class relations emerges and refines across the network layers. Our analysis demonstrates that the learned representations closely align with the semantic structure of the dataset, providing a quantitative description of the clustering process. Notably, we show that in the hypernym label space, certain properties of neural collapse appear earlier than in the hyponym label space, helping to bridge the gap between the initial and terminal phases of learning. We believe our findings offer new insights into the mechanisms driving hierarchical learning in deep networks, paving the way for future advancements in understanding deep learning dynamics.

View on arXiv
@article{malashin2025_2502.12125,
  title={Hypernym Bias: Unraveling Deep Classifier Training Dynamics through the Lens of Class Hierarchy},
  author={Roman Malashin and Valeria Yachnaya and Alexander Mullin},
  journal={arXiv preprint arXiv:2502.12125},
  year={2025}
}