Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time

3 March 2025
Jon Donnelly
Zhicheng Guo
Alina Jade Barnett
Hayden McTavish
Chaofan Chen
Cynthia Rudin
Abstract

Interpretability is critical for machine learning models in high-stakes settings because it allows users to verify the model's reasoning. In computer vision, prototypical part models (ProtoPNets) have become the dominant model type to meet this need. Users can easily identify flaws in ProtoPNets, but fixing problems in a ProtoPNet requires slow, difficult retraining that is not guaranteed to resolve the issue. This problem is called the "interaction bottleneck." We solve the interaction bottleneck for ProtoPNets by simultaneously finding many equally good ProtoPNets (i.e., a draw from a "Rashomon set"). We show that our framework - called Proto-RSet - quickly produces many accurate, diverse ProtoPNets, allowing users to correct problems in real time while maintaining performance guarantees with respect to the training set. We demonstrate the utility of this method in two settings: 1) removing synthetic bias introduced to a bird identification model and 2) debugging a skin cancer identification model. This tool empowers non-machine-learning experts, such as clinicians or domain experts, to quickly refine and correct machine learning models without repeated retraining by machine learning experts.

@article{donnelly2025_2503.01087,
  title={Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time},
  author={Jon Donnelly and Zhicheng Guo and Alina Jade Barnett and Hayden McTavish and Chaofan Chen and Cynthia Rudin},
  journal={arXiv preprint arXiv:2503.01087},
  year={2025}
}