For deep regression, preserving the ordinality of the targets with respect to the feature representation improves performance across various tasks. However, a theoretical explanation for the benefits of ordinality is still lacking. This work reveals that preserving ordinality reduces the conditional entropy of the representation given the target. Our findings further show that typical regression losses do little to reduce this conditional entropy, even though doing so is vital for generalization performance. Motivated by this, we introduce an optimal transport-based regularizer that preserves the similarity relationships of targets in the feature space, thereby reducing the conditional entropy. Additionally, we introduce a simple yet efficient strategy of duplicating the regressor targets, also with the aim of reducing this entropy. Experiments on three real-world regression tasks verify the effectiveness of our strategies in improving deep regression. Code: this https URL.
@article{zhang2025_2502.09122,
  title   = {Improving Deep Regression with Tightness},
  author  = {Shihao Zhang and Yuguang Yan and Angela Yao},
  journal = {arXiv preprint arXiv:2502.09122},
  year    = {2025}
}
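The abstract does not spell out the regularizer's exact form. As a purely illustrative sketch (not the paper's actual method), one way to couple feature-space distances with target-space similarity via entropic optimal transport is to build a Sinkhorn transport plan from pairwise target distances and use it to weight pairwise feature distances; the function names `sinkhorn_plan` and `ot_ordinal_regularizer` and the specific cost construction below are our assumptions for illustration only.

```python
import numpy as np

def sinkhorn_plan(C, eps=1.0, n_iter=100):
    """Sinkhorn-Knopp scaling: turn K = exp(-C/eps) into an
    (approximately) doubly stochastic transport plan."""
    K = np.exp(-C / eps)
    n = C.shape[0]
    u = np.ones(n)
    v = np.ones(n)
    for _ in range(n_iter):
        u = 1.0 / (K @ v)      # enforce unit row marginals
        v = 1.0 / (K.T @ u)    # enforce unit column marginals
    return u[:, None] * K * v[None, :]

def ot_ordinal_regularizer(z, y, eps=1.0, n_iter=100):
    """Illustrative OT-flavored regularizer: the plan puts large mass on
    pairs with similar targets, so minimizing the weighted feature-space
    transport cost pulls such pairs close in feature space.

    z: (n, d) feature representations; y: (n,) regression targets.
    """
    # pairwise squared distances in feature space and target space
    Cz = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    Cy = (y[:, None] - y[None, :]) ** 2
    P = sinkhorn_plan(Cy, eps, n_iter)
    return (P * Cz).sum()
```

With this construction, a batch whose feature distances already mirror its target distances incurs a lower penalty than one where nearby targets are mapped to distant features, which is the ordinality-preserving behavior the abstract describes.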