Realizable H-Consistent and Bayes-Consistent Loss Functions for Learning to Defer

18 July 2024
Anqi Mao
M. Mohri
Yutao Zhong
arXiv:2407.13732
Abstract

We present a comprehensive study of surrogate loss functions for learning to defer. We introduce a broad family of surrogate losses, parameterized by a non-increasing function Ψ, and establish their realizable H-consistency under mild conditions. For cost functions based on classification error, we further show that these losses admit H-consistency bounds when the hypothesis set is symmetric and complete, a property satisfied by common neural network and linear function hypothesis sets. Our results also resolve an open question raised in previous work (Mozannar et al., 2023) by proving the realizable H-consistency and Bayes-consistency of a specific surrogate loss. Furthermore, we identify choices of Ψ that lead to H-consistent surrogate losses for any general cost function, thus achieving Bayes-consistency, realizable H-consistency, and H-consistency bounds simultaneously. We also investigate the relationship between H-consistency bounds and realizable H-consistency in learning to defer, highlighting key differences from standard classification. Finally, we empirically evaluate our proposed surrogate losses and compare them with existing baselines.
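To make the setup concrete, the following minimal Python/NumPy sketch shows the general shape of a Ψ-parameterized surrogate for learning to defer: a model scores the n classes plus one extra "defer" option, a non-increasing function Ψ is applied to score margins, and deferral is charged a cost that depends on whether the expert errs. The specific margins, the cost terms, and the choice Ψ(t) = exp(-t) below are illustrative assumptions, not the exact loss family defined in the paper.

import numpy as np

# Illustrative sketch only: not the paper's loss family. A deferral model
# scores n classes plus one extra "defer" category; a non-increasing psi is
# applied to score margins, and deferral is charged a cost that depends on
# whether the expert is correct on this example.

def psi_exp(t):
    # One non-increasing choice: psi(t) = exp(-t).
    return np.exp(-t)

def defer_surrogate(scores, label, expert_correct, psi=psi_exp, defer_cost=0.5):
    # scores: array of shape (n_classes + 1,); the last entry scores deferral.
    # label: index of the true class.
    # expert_correct: 1.0 if the expert predicts this example correctly, else 0.0.
    n = len(scores) - 1
    defer_idx = n
    # Margins of the true class against every other category (classes and defer).
    class_margins = scores[label] - np.delete(scores, label)
    # Margins of the defer option against every class.
    defer_margins = scores[defer_idx] - scores[:n]
    # Cost of deferring: defer_cost when the expert errs, 0 otherwise; the
    # first term plays the role of a classification-error-style predictor cost
    # by penalizing small true-class margins.
    c_defer = defer_cost * (1.0 - expert_correct)
    return np.sum(psi(class_margins)) + c_defer * np.sum(psi(defer_margins))

# Example: 3 classes plus defer; true label 1; expert correct on this input.
scores = np.array([0.2, 1.5, -0.3, 0.4])
print(defer_surrogate(scores, label=1, expert_correct=1.0))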
