ResearchTrend.AI
Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals

19 May 2025
Marouane Il Idrissi
Agathe Fernandes Machado
Ewen Gallic
Arthur Charpentier
Abstract

Cooperative game theory methods, notably Shapley values, have significantly enhanced machine learning (ML) interpretability. However, existing explainable AI (XAI) frameworks mainly attribute average model predictions, overlooking predictive uncertainty. This work addresses that gap by proposing a novel, model-agnostic uncertainty attribution (UA) method grounded in conformal prediction (CP). By defining cooperative games in which CP interval properties, such as width and bounds, serve as value functions, we systematically attribute predictive uncertainty to input features. Moving beyond traditional Shapley values, we use the richer class of Harsanyi allocations, in particular the proportional Shapley values, which distribute attributions proportionally to feature importance. To ensure computational feasibility, we propose a Monte Carlo approximation method with robust statistical guarantees, significantly improving runtime efficiency. Comprehensive experiments on synthetic benchmarks and real-world datasets demonstrate the practical utility and interpretive depth of our approach. By combining cooperative game theory and conformal prediction, we offer a rigorous, flexible toolkit for understanding and communicating predictive uncertainty in high-stakes ML applications.
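The core idea of the abstract — treating a conformal prediction interval property (here, its width) as the value function of a cooperative game over features, and attributing it via Monte Carlo-sampled Shapley values — can be illustrated with a minimal sketch. This is not the authors' code: the synthetic data, the split-conformal value function, and all names are illustrative assumptions, and the simple permutation-sampling estimator stands in for the paper's approximation method with statistical guarantees.

```python
# Illustrative sketch (not the paper's implementation): split-conformal
# interval width as a cooperative-game value function, attributed to
# features via Monte Carlo Shapley estimation.
from functools import lru_cache
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
n, d = 600, 3
X = rng.normal(size=(n, d))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(size=n)
tr, cal = slice(0, 400), slice(400, 600)  # proper-train / calibration split

@lru_cache(maxsize=None)
def v(S, alpha=0.1):
    """Value function: split-conformal interval width when only the
    features in coalition S (a sorted tuple of indices) are available."""
    S = list(S)
    if S:
        coef, *_ = np.linalg.lstsq(X[tr][:, S], y[tr], rcond=None)
        resid = np.abs(y[cal] - X[cal][:, S] @ coef)
    else:
        # Empty coalition: fall back to the constant (mean) predictor.
        resid = np.abs(y[cal] - y[tr].mean())
    return 2.0 * np.quantile(resid, 1.0 - alpha)

def mc_shapley(v, d, n_perm=200, seed=1):
    """Monte Carlo Shapley values via random feature orderings."""
    prng = np.random.default_rng(seed)
    phi = np.zeros(d)
    for _ in range(n_perm):
        S, prev = [], v(())
        for j in (int(k) for k in prng.permutation(d)):
            S.append(j)
            cur = v(tuple(sorted(S)))
            phi[j] += cur - prev  # marginal contribution of feature j
            prev = cur
    return phi / n_perm

phi = mc_shapley(v, d)
print("width attributions:", phi)  # most negative for the most informative feature
```

Informative features shrink the interval, so their attributions are negative; the efficiency property guarantees the attributions sum exactly to v(full coalition) minus v(empty coalition). The proportional Shapley variant mentioned in the abstract would replace the uniform averaging over orderings with a proportional sharing rule.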

@article{idrissi2025_2505.13118,
  title={Unveil Sources of Uncertainty: Feature Contribution to Conformal Prediction Intervals},
  author={Marouane Il Idrissi and Agathe Fernandes Machado and Ewen Gallic and Arthur Charpentier},
  journal={arXiv preprint arXiv:2505.13118},
  year={2025}
}