
Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment

Abstract

Real-world data distributions are often highly skewed. This has spurred a growing body of research on long-tailed recognition, which addresses the class imbalance encountered when training classification models. Among these methods, multiplicative logit adjustment (MLA) stands out as simple and effective. What theoretical foundation explains the effectiveness of this heuristic method? We justify the effectiveness of MLA in two steps. First, we develop a theory that adjusts optimal decision boundaries by estimating feature spread on the basis of neural collapse. Second, we demonstrate that MLA approximates this optimal method. Additionally, through experiments on long-tailed datasets, we illustrate the practical usefulness of MLA under more realistic conditions. We also offer experimental insights to guide the tuning of MLA hyperparameters.
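To make the idea concrete, here is a minimal sketch of post-hoc multiplicative logit adjustment. This is an illustration, not the paper's exact formulation: it assumes the common form in which the class-y score is rescaled by the empirical class prior raised to a negative power, so tail classes get a larger multiplier. The names `mla_predict`, `class_priors`, and `tau` are illustrative, and the scores are assumed non-negative so the multiplicative rescaling is order-meaningful.

```python
import numpy as np

def mla_predict(logits: np.ndarray, class_priors: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Sketch of post-hoc multiplicative logit adjustment (MLA).

    logits: shape (batch, num_classes), assumed non-negative scores
    class_priors: shape (num_classes,), empirical training class frequencies
    tau: tuned hyperparameter; tau=0 recovers the unadjusted classifier
    """
    adjusted = logits * class_priors ** (-tau)  # boost rare (tail) classes
    return adjusted.argmax(axis=1)

# Toy example: two classes, the head class holds 90% of the training data.
priors = np.array([0.9, 0.1])
logits = np.array([[0.6, 0.5]])              # unadjusted argmax picks class 0
print(mla_predict(logits, priors, tau=0.0))  # tau=0: logits unchanged -> class 0
print(mla_predict(logits, priors, tau=1.0))  # tau=1: boundary shifts -> tail class 1
```

The hyperparameter `tau` controls how far the decision boundary moves toward head classes; the paper's experiments offer guidance on choosing it.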

@article{hasegawa2025_2409.17582,
  title={Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment},
  author={Naoya Hasegawa and Issei Sato},
  journal={arXiv preprint arXiv:2409.17582},
  year={2025}
}