Automatically Adaptive Conformal Risk Control

25 June 2024
Vincent Blot
Anastasios Nikolas Angelopoulos
Michael I Jordan
Nicolas Brunel
Abstract

Science and technology have a growing need for effective mechanisms that ensure reliable, controlled performance from black-box machine learning algorithms. These performance guarantees should ideally hold conditionally on the input; that is, the performance guarantees should hold, at least approximately, no matter what the input. However, beyond stylized discrete groupings such as ethnicity and gender, the right notion of conditioning can be difficult to define. For example, in problems such as image segmentation, we want the uncertainty to reflect the intrinsic difficulty of the test sample, but this may be difficult to capture via a conditioning event. Building on the recent work of Gibbs et al. [2023], we propose a methodology for achieving approximate conditional control of statistical risks (the expected value of loss functions) by adapting to the difficulty of test samples. Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning. We apply this framework to various regression and segmentation tasks, enabling finer-grained control over model performance and demonstrating that by continuously monitoring and adjusting these parameters, we can achieve superior precision compared to conventional risk-control methods.
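
The paper's automatically adaptive procedure is not reproduced here. As a point of reference only, the sketch below illustrates the plain (marginal) conformal risk control calibration step that the abstract's method builds on: pick the smallest threshold whose finite-sample-corrected empirical risk on a calibration set stays below the target level. The function name, the toy loss matrix, and the chosen risk level are illustrative assumptions, not code from the paper.

import numpy as np

def calibrate_crc(cal_losses, alpha, lambdas, max_loss=1.0):
    """
    Marginal conformal risk control calibration (a minimal sketch).

    cal_losses : array of shape (n, len(lambdas)); cal_losses[i, j] is the loss
                 of calibration example i at threshold lambdas[j]. Losses are
                 assumed bounded by max_loss and non-increasing in lambda.
    alpha      : target risk level, e.g. 0.1.
    lambdas    : candidate thresholds, sorted in increasing order.

    Returns the smallest lambda whose adjusted empirical risk is at most alpha,
    which controls the expected loss on a fresh exchangeable test point.
    """
    n = cal_losses.shape[0]
    # Finite-sample correction: (n/(n+1)) * empirical risk + max_loss/(n+1) <= alpha
    adjusted_risk = cal_losses.mean(axis=0) * n / (n + 1) + max_loss / (n + 1)
    valid = np.nonzero(adjusted_risk <= alpha)[0]
    if len(valid) == 0:
        # No threshold certifies the target level; fall back to the largest one.
        return lambdas[-1]
    return lambdas[valid[0]]

# Toy usage: synthetic losses that shrink as lambda grows (e.g. larger prediction sets).
rng = np.random.default_rng(0)
lambdas = np.linspace(0.0, 1.0, 101)
cal_losses = np.clip(rng.uniform(size=(500, 1)) - lambdas[None, :], 0.0, 1.0)
lam_hat = calibrate_crc(cal_losses, alpha=0.1, lambdas=lambdas)
print("calibrated lambda:", lam_hat)

The adaptive framework described in the abstract replaces this single global threshold with a data-driven, input-dependent choice, so that harder test samples receive appropriately larger uncertainty.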

@article{blot2025_2406.17819,
  title={Automatically Adaptive Conformal Risk Control},
  author={Vincent Blot and Anastasios N Angelopoulos and Michael I Jordan and Nicolas J-B Brunel},
  journal={arXiv preprint arXiv:2406.17819},
  year={2025}
}