

Improving Simple Models with Confidence Profiles

Neural Information Processing Systems (NeurIPS), 2018
19 July 2018
Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Olsen
arXiv:1807.07506 (abs | PDF | HTML)

Papers citing "Improving Simple Models with Confidence Profiles" (19 papers)
1. Gradient based Feature Attribution in Explainable AI: A Technical Review. Yongjie Wang, Tong Zhang, Xu Guo, Zhiqi Shen. 15 Mar 2024. [XAI]

2. Explainable AI for clinical risk prediction: a survey of concepts, methods, and modalities. Munib Mesinovic, Peter Watkinson, Ting Zhu. 16 Aug 2023. [FaML]

3. On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach. Neural Information Processing Systems (NeurIPS), 2022. Dennis L. Wei, Rahul Nair, Amit Dhurandhar, Kush R. Varshney, Elizabeth M. Daly, Moninder Singh. 02 Nov 2022. [FAtt]

4. Using Knowledge Distillation to improve interpretable models in a retail banking context. Maxime Biehler, Mohamed Guermazi, Célim Starck. 30 Sep 2022.

5. PainPoints: A Framework for Language-based Detection of Chronic Pain and Expert-Collaborative Text-Summarization. S. Fadnavis, Amit Dhurandhar, R. Norel, Jenna M. Reinen, C. Agurto, E. Secchettin, V. Schweiger, Giovanni Perini, Guillermo Cecchi. 14 Sep 2022.

6. Analogies and Feature Attributions for Model Agnostic Explanation of Similarity Learners. Karthikeyan N. Ramamurthy, Amit Dhurandhar, Dennis L. Wei, Zaid Bin Tariq. 02 Feb 2022. [FAtt]

7. Auto-Transfer: Learning to Route Transferrable Representations. International Conference on Learning Representations (ICLR), 2022. K. Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar. 02 Feb 2022. [AAML]

8. Locally Invariant Explanations: Towards Stable and Unidirectional Explanations through Local Invariant Learning. Neural Information Processing Systems (NeurIPS), 2022. Amit Dhurandhar, Karthikeyan N. Ramamurthy, Kartik Ahuja, Vijay Arya. 28 Jan 2022. [FAtt]

9. Applications of Explainable AI for 6G: Technical Aspects, Use Cases, and Research Challenges. Shen Wang, M. Qureshi, Luis Miralles-Pechuán, Thien Huynh-The, Thippa Reddy Gadekallu, Madhusanka Liyanage. 09 Dec 2021.

10. Human-Centered Explainable AI (XAI): From Algorithms to User Experiences. Q. V. Liao, R. Varshney. 20 Oct 2021.

11. AI Explainability 360: Impact and Design. AAAI Conference on Artificial Intelligence (AAAI), 2021. Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang. 24 Sep 2021.

12. Multihop: Leveraging Complex Models to Learn Accurate Simple Models. Amit Dhurandhar, Tejaswini Pedapati. 14 Sep 2021.

13. Perceptron Theory Can Predict the Accuracy of Neural Networks. IEEE Transactions on Neural Networks and Learning Systems (IEEE TNNLS), 2020. Denis Kleyko, A. Rosato, E. P. Frady, Massimo Panella, Friedrich T. Sommer. 14 Dec 2020. [GNN]

14. Unifying Model Explainability and Robustness via Machine-Checkable Concepts. Vedant Nanda, Till Speicher, John P. Dickerson, Krishna P. Gummadi, Muhammad Bilal Zafar. 01 Jul 2020. [AAML]

15. Learning Global Transparent Models Consistent with Local Contrastive Explanations. Tejaswini Pedapati, Avinash Balakrishnan, Karthikeyan Shanmugam, Amit Dhurandhar. 19 Feb 2020. [FAtt]

16. One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, ..., Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis L. Wei, Yunfeng Zhang. 06 Sep 2019. [XAI]

17. Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks. European Conference on Artificial Intelligence (ECAI), 2019. R. Confalonieri, Tillman Weyde, Tarek R. Besold, Fermín Moscoso del Prado Martín. 19 Jun 2019.

18. Model Agnostic Contrastive Explanations for Structured Data. Amit Dhurandhar, Tejaswini Pedapati, Avinash Balakrishnan, Pin-Yu Chen, Karthikeyan Shanmugam, Ruchi Puri. 31 May 2019. [FAtt]

19. Enhancing Simple Models by Exploiting What They Already Know. Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss. 30 May 2019.