Amazing Things Come From Having Many Good Models

5 July 2024
Cynthia Rudin, Chudi Zhong, Lesia Semenova, Margo Seltzer, Ronald E. Parr, Jiachang Liu, Srikar Katta, Jon Donnelly, Harry Chen, Zachery Boner
arXiv: 2407.04846

Papers citing "Amazing Things Come From Having Many Good Models"

14 papers shown

Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users
Julian Rosenberger, Philipp Schröppel, Sven Kruschel, Mathias Kraus, Patrick Zschech, Maximilian Förster
11 May 2025 · FAtt

Unique Rashomon Sets for Robust Active Learning
Simon Nguyen, Kentaro Hoffman, Tyler H. McCormick
13 Mar 2025

Rashomon Sets for Prototypical-Part Networks: Editing Interpretable Models in Real-Time
J. Donnelly, Zhicheng Guo, A. Barnett, Hayden McTavish, Chaofan Chen, Cynthia Rudin
03 Mar 2025

All You Need for Counterfactual Explainability Is Principled and Reliable Estimate of Aleatoric and Epistemic Uncertainty
Kacper Sokol, Eyke Hüllermeier
24 Feb 2025

Near Optimal Decision Trees in a SPLIT Second
Varun Babbar, Hayden McTavish, Cynthia Rudin, Margo Seltzer
21 Feb 2025

Rashomon perspective for measuring uncertainty in the survival predictive maintenance models
Yigitcan Yardimci, Mustafa Cavus
16 Feb 2025

The Curious Case of Arbitrariness in Machine Learning
Prakhar Ganesh, Afaf Taik, G. Farnadi
28 Jan 2025

EXAGREE: Towards Explanation Agreement in Explainable Machine Learning
Sichao Li, Quanling Deng, Amanda S. Barnard
04 Nov 2024

Perceptions of the Fairness Impacts of Multiplicity in Machine Learning
Anna P. Meyer, Yea-Seul Kim, Aws Albarghouthi, Loris D'Antoni
18 Sep 2024 · FaML

Credibility-Aware Multi-Modal Fusion Using Probabilistic Circuits
Sahil Sidheekh, Pranuthi Tenali, Saurabh Mathur, Erik Blasch, Kristian Kersting, S. Natarajan
05 Mar 2024

Sparse and Faithful Explanations Without Sparse Models
Yiyang Sun, Zhi Chen, Vittorio Orlandi, Tong Wang, Cynthia Rudin
15 Feb 2024

Fast and Interpretable Mortality Risk Scores for Critical Care Patients
Chloe Qinyu Zhu, Muhang Tian, Lesia Semenova, Jiachang Liu, Jack Xu, Joseph Scarpa, Cynthia Rudin
21 Nov 2023

Exploring the Whole Rashomon Set of Sparse Decision Trees
Rui Xin, Chudi Zhong, Zhi Chen, Takuya Takagi, Margo Seltzer, Cynthia Rudin
16 Sep 2022

In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction
Caroline Linjun Wang, Bin Han, Bhrij Patel, Cynthia Rudin
08 May 2020 · FaML, HAI