Actionable Recourse in Linear Classification
Berk Ustun, Alexander Spangher, Yang Liu
18 September 2018. arXiv:1809.06514. FaML.
Papers citing "Actionable Recourse in Linear Classification" (showing 50 of 151 papers):
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods. E. Amparore, Alan Perotti, P. Bajardi. FAtt. 01 Jun 2021.
Leveraging Sparse Linear Layers for Debuggable Deep Networks. Eric Wong, Shibani Santurkar, A. Madry. FAtt. 11 May 2021.
Optimal Counterfactual Explanations for Scorecard modelling. Guillermo Navas-Palencia. 17 Apr 2021.
Consequence-aware Sequential Counterfactual Generation. Philip Naumann, Eirini Ntoutsi. OffRL. 12 Apr 2021.
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice. David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi. 27 Mar 2021.
Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals. Sainyam Galhotra, Romila Pradhan, Babak Salimi. CML. 22 Mar 2021.
Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties. Lisa Schut, Oscar Key, R. McGrath, Luca Costabello, Bogdan Sacaleanu, Medb Corcoran, Y. Gal. CML. 16 Mar 2021.
Interpretable Machine Learning: Moving From Mythos to Diagnostics. Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar. 10 Mar 2021.
Strategic Classification Made Practical. Sagi Levanon, Nir Rosenfeld. 02 Mar 2021.
Contrastive Explanations for Model Interpretability. Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg. 02 Mar 2021.
Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms. Miguel Á. Carreira-Perpiñán, Suryabhan Singh Hada. CML, AAML. 01 Mar 2021.
Information Discrepancy in Strategic Learning. Yahav Bechavod, Chara Podimata, Zhiwei Steven Wu, Juba Ziani. 01 Mar 2021.
Towards Robust and Reliable Algorithmic Recourse. Sohini Upadhyay, Shalmali Joshi, Himabindu Lakkaraju. 26 Feb 2021.
If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques. Mark T. Keane, Eoin M. Kenny, Eoin Delaney, Barry Smyth. CML. 26 Feb 2021.
Towards a Unified Framework for Fair and Stable Graph Representation Learning. Chirag Agarwal, Himabindu Lakkaraju, Marinka Zitnik. 25 Feb 2021.
Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs. Harini Suresh, Steven R. Gomez, K. Nam, Arvind Satyanarayan. 24 Jan 2021.
GeCo: Quality Counterfactual Explanations in Real Time. Maximilian Schleich, Zixuan Geng, Yihong Zhang, D. Suciu. 05 Jan 2021.
ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries. Jinyin Chen, Zhen Wang, Haibin Zheng, Jun Xiao, Zhaoyan Ming. AAML. 18 Dec 2020.
Declarative Approaches to Counterfactual Explanations for Classification. Leopoldo Bertossi. 15 Nov 2020.
Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End. R. Mothilal, Divyat Mahajan, Chenhao Tan, Amit Sharma. FAtt, CML. 10 Nov 2020.
Linear Classifiers that Encourage Constructive Adaptation. Yatong Chen, Jialu Wang, Yang Liu. 31 Oct 2020.
Incorporating Interpretable Output Constraints in Bayesian Neural Networks. Wanqian Yang, Lars Lorch, Moritz Graule, Himabindu Lakkaraju, Finale Doshi-Velez. UQCV, BDL. 21 Oct 2020.
Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review. Sahil Verma, Varich Boonsanong, Minh Hoang, Keegan E. Hines, John P. Dickerson, Chirag Shah. CML. 20 Oct 2020.
Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges. Christoph Molnar, Giuseppe Casalicchio, B. Bischl. AI4TS, AI4CE. 19 Oct 2020.
Not All Datasets Are Born Equal: On Heterogeneous Data and Adversarial Examples. Yael Mathov, Eden Levy, Ziv Katzir, A. Shabtai, Yuval Elovici. AAML. 07 Oct 2020.
Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses. Kaivalya Rawal, Himabindu Lakkaraju. 15 Sep 2020.
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples. Timo Freiesleben. GAN. 11 Sep 2020.
Neither Private Nor Fair: Impact of Data Imbalance on Utility and Fairness in Differential Privacy. Tom Farrand, Fatemehsadat Mireshghallah, Sahib Singh, Andrew Trask. FedML. 10 Sep 2020.
Model extraction from counterfactual explanations. Ulrich Aïvodji, Alexandre Bolot, Sébastien Gambs. MIACV, MLAU. 03 Sep 2020.
Reliable Post hoc Explanations: Modeling Uncertainty in Explainability. Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju. FAtt. 11 Aug 2020.
On Counterfactual Explanations under Predictive Multiplicity. Martin Pawelczyk, Klaus Broelemann, Gjergji Kasneci. 23 Jun 2020.
Getting a CLUE: A Method for Explaining Uncertainty Estimates. Javier Antorán, Umang Bhatt, T. Adel, Adrian Weller, José Miguel Hernández-Lobato. UQCV, BDL. 11 Jun 2020.
Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. Amir-Hossein Karimi, Julius von Kügelgen, Bernhard Schölkopf, Isabel Valera. CML. 11 Jun 2020.
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications. Wojciech Samek, G. Montavon, Sebastian Lapuschkin, Christopher J. Anders, K. Müller. XAI. 17 Mar 2020.
ViCE: Visual Counterfactual Explanations for Machine Learning Models. Oscar Gomez, Steffen Holter, Jun Yuan, E. Bertini. AAML. 05 Mar 2020.
The Problem with Metrics is a Fundamental Problem for AI. Rachel L. Thomas, D. Uminsky. 20 Feb 2020.
Algorithmic Recourse: from Counterfactual Explanations to Interventions. Amir-Hossein Karimi, Bernhard Schölkopf, Isabel Valera. CML. 14 Feb 2020.
Decisions, Counterfactual Explanations and Strategic Behavior. Stratis Tsirtsis, Manuel Gomez Rodriguez. 11 Feb 2020.
Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers. Divyat Mahajan, Chenhao Tan, Amit Sharma. OOD, CML. 06 Dec 2019.
Learning Model-Agnostic Counterfactual Explanations for Tabular Data. Martin Pawelczyk, Johannes Haug, Klaus Broelemann, Gjergji Kasneci. OOD, CML. 21 Oct 2019.
FACE: Feasible and Actionable Counterfactual Explanations. Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach. 20 Sep 2019.
Predictive Multiplicity in Classification. Charles Marx, Flavio du Pin Calmon, Berk Ustun. 14 Sep 2019.
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning. Fan Yang, Mengnan Du, Xia Hu. XAI, ELM. 16 Jul 2019.
FlipTest: Fairness Testing via Optimal Transport. Emily Black, Samuel Yeom, Matt Fredrikson. 21 Jun 2019.
Model-Agnostic Counterfactual Explanations for Consequential Decisions. Amir-Hossein Karimi, Gilles Barthe, Borja Balle, Isabel Valera. 27 May 2019.
Optimal Decision Making Under Strategic Behavior. Stratis Tsirtsis, Behzad Tabibian, M. Khajehnejad, Adish Singla, Bernhard Schölkopf, Manuel Gomez Rodriguez. 22 May 2019.
Interpreting Neural Networks Using Flip Points. Roozbeh Yousefzadeh, D. O’Leary. AAML, FAtt. 21 Mar 2019.
Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions. Hao Wang, Berk Ustun, Flavio du Pin Calmon. FaML. 29 Jan 2019.
Efficient Search for Diverse Coherent Explanations. Chris Russell. 02 Jan 2019.
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias. Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, ..., Diptikalyan Saha, P. Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang. FaML, SyDa. 03 Oct 2018.