A Tunable Loss Function for Robust Classification: Calibration, Landscape, and Generalization

We introduce a tunable loss function called α-loss, parameterized by α ∈ (0, ∞], which interpolates between the exponential loss (α = 1/2), the log-loss (α = 1), and the 0-1 loss (α = ∞), for the machine learning setting of classification. Theoretically, we illustrate a fundamental connection between α-loss and Arimoto conditional entropy, verify the classification-calibration of α-loss in order to demonstrate asymptotic optimality via Rademacher complexity generalization techniques, and build upon a notion called strictly local quasi-convexity in order to quantitatively characterize the optimization landscape of α-loss. Practically, we perform class imbalance, robustness, and classification experiments on benchmark image datasets using convolutional neural networks. Our main practical conclusion is that certain tasks may benefit from tuning α-loss away from log-loss (α = 1), and to this end we provide simple heuristics for the practitioner. In particular, navigating the α hyperparameter can readily provide superior model robustness to label flips (α > 1) and sensitivity to imbalanced classes (α < 1).
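For concreteness, for α ∈ (0, ∞) with α ≠ 1, the α-loss of the probability P(y|x) a model assigns to the true label y is ℓ_α = (α/(α−1))(1 − P(y|x)^{1−1/α}), which recovers −log P(y|x) as α → 1 and 1 − P(y|x) (a soft 0-1 loss) as α → ∞. Below is a minimal NumPy sketch of this expression; the function name, the clipping constant, and the explicit handling of the two limits are our own illustrative choices, not the paper's code.

```python
import numpy as np

def alpha_loss(p_true, alpha):
    """Sketch of the α-loss evaluated at the probability of the true label.

    p_true : array of model probabilities P(y|x) for the correct class y.
    alpha  : tuning parameter in (0, ∞]; α = 1/2 gives the exponential loss,
             α = 1 the log-loss, and α → ∞ a soft 0-1 loss.
    """
    p = np.clip(p_true, 1e-12, 1.0)  # guard against log/negative-power blow-up
    if alpha == 1.0:                  # limit α → 1: log-loss
        return -np.log(p)
    if np.isinf(alpha):               # limit α → ∞: soft 0-1 loss
        return 1.0 - p
    return (alpha / (alpha - 1.0)) * (1.0 - p ** (1.0 - 1.0 / alpha))

if __name__ == "__main__":
    # Illustration: losses on three true-class probabilities across α values.
    p = np.array([0.9, 0.6, 0.1])
    for a in (0.5, 1.0, 2.0, np.inf):
        print(a, alpha_loss(p, a))
```

Note that at α = 1/2 the expression reduces to 1/P(y|x) − 1, which for a sigmoid model equals the exponential loss of the margin, matching the interpolation endpoints stated in the abstract.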