
Exploring Information-Theoretic Metrics Associated with Neural Collapse in Supervised Training

Abstract

In this paper, we introduce matrix entropy as an analytical tool for studying supervised learning, investigating the information content of data representations and classification head vectors, as well as the dynamic interactions between them during training. Our experimental results reveal that matrix entropy effectively captures how the information content of data representations and classification head vectors varies as neural networks approach Neural Collapse during supervised training, while also serving as a robust metric for measuring similarity among data samples. Leveraging this property, we propose the Cross-Model Alignment (CMA) loss to optimize the fine-tuning of pretrained models. To characterize the dynamics of neural networks nearing the Neural Collapse state, we introduce two novel metrics: the Matrix Mutual Information Ratio (MIR) and the Matrix Entropy Difference Ratio (HDR), which quantitatively assess the interactions between data representations and classification heads in supervised learning, and we derive their theoretical optimal values under the Neural Collapse state. Our experiments demonstrate that MIR and HDR effectively explain various phenomena in neural networks, including the dynamics of standard supervised training and linear mode connectivity. Moreover, we use MIR and HDR to analyze the dynamics of grokking, a fascinating phenomenon in supervised learning in which a model unexpectedly generalizes long after fitting the training data.
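The abstract does not spell out the definitions, so the sketch below is only a minimal illustration, not the authors' reference implementation. It assumes "matrix entropy" means the von Neumann entropy of a trace-normalized Gram matrix of L2-normalized vectors, and that matrix mutual information is formed via the Hadamard product of two such matrices, a common construction in matrix-information-theoretic work. The MIR and HDR normalizations shown are illustrative placeholders and may differ from the paper's exact formulas.

# Minimal sketch of the quantities named in the abstract (assumptions noted above).
import numpy as np

def density(Z: np.ndarray) -> np.ndarray:
    """Trace-normalized d x d Gram matrix of the rows of Z (shape n x d)."""
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # L2-normalize each row
    K = Z.T @ Z                                       # d x d, positive semidefinite
    return K / np.trace(K)                            # unit trace, density-like

def matrix_entropy(K: np.ndarray) -> float:
    """Von Neumann entropy -tr(K log K) of a unit-trace PSD matrix."""
    eig = np.linalg.eigvalsh(K)
    eig = eig[eig > 1e-12]                            # drop numerical zeros
    return float(-(eig * np.log(eig)).sum())

def matrix_mutual_information(K1: np.ndarray, K2: np.ndarray) -> float:
    """MI(K1, K2) = H(K1) + H(K2) - H(K1 * K2), Hadamard product renormalized."""
    K12 = K1 * K2
    K12 = K12 / np.trace(K12)
    return matrix_entropy(K1) + matrix_entropy(K2) - matrix_entropy(K12)

def mir(Z: np.ndarray, W: np.ndarray) -> float:
    """Illustrative Matrix Mutual Information Ratio between features Z and head W."""
    K1, K2 = density(Z), density(W)
    return matrix_mutual_information(K1, K2) / min(matrix_entropy(K1), matrix_entropy(K2))

def hdr(Z: np.ndarray, W: np.ndarray) -> float:
    """Illustrative Matrix Entropy Difference Ratio between features Z and head W."""
    h1, h2 = matrix_entropy(density(Z)), matrix_entropy(density(W))
    return abs(h1 - h2) / max(h1, h2)

# Toy usage: penultimate-layer features (n x d) vs. classifier head vectors (c x d).
feats = np.random.randn(512, 64)
head = np.random.randn(10, 64)
print(f"MIR = {mir(feats, head):.3f}, HDR = {hdr(feats, head):.3f}")

Working in the d x d feature-covariance form (Z.T @ Z rather than Z @ Z.T) keeps the Gram matrices of the features and of the classification head the same size, so the Hadamard-product mutual information is well defined even when the number of samples differs from the number of classes.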

@article{song2025_2409.16767,
  title={Exploring Information-Theoretic Metrics Associated with Neural Collapse in Supervised Training},
  author={Kun Song and Zhiquan Tan and Bochao Zou and Jiansheng Chen and Huimin Ma and Weiran Huang},
  journal={arXiv preprint arXiv:2409.16767},
  year={2025}
}