User-Aware Algorithmic Recourse with Preference Elicitation

Counterfactual interventions are a powerful tool to explain the decisions of a black-box decision process and to enable algorithmic recourse. A counterfactual intervention is a sequence of actions that, if performed by a user, can overturn an unfavourable decision made by an automated decision system. However, most current methods provide interventions without considering the user's preferences. In this work, we propose a paradigm shift by providing a novel formalization that treats the user as an active part of the process rather than a mere target. Following the preference elicitation setting, we introduce the first human-in-the-loop approach to algorithmic recourse. We also present a polynomial-time procedure to ask questions that maximize the Expected Utility of Selection (EUS), a measure of the utility of a choice set that accounts for the uncertainty over both the model and the user's responses. We use it to iteratively refine our cost estimates in a Bayesian fashion. We integrate this preference elicitation strategy into a reinforcement learning agent coupled with Monte Carlo Tree Search for efficient exploration, so as to provide personalized interventions achieving algorithmic recourse. An experimental evaluation on synthetic and real-world datasets shows that a handful of queries suffices to achieve a substantial reduction in the cost of interventions compared to user-independent alternatives.
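To make the elicitation loop concrete, below is a minimal sketch of one query-and-update step under simplifying assumptions: pairwise choice sets, a linear cost model over per-feature weights, a particle approximation of the Bayesian posterior, and a noiseless user response. All names and the encoding of interventions are illustrative and are not the paper's actual formulation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Illustrative setup (not the paper's encoding): each candidate intervention
# is an effort vector over features, and the user's cost is linear in unknown
# per-feature weights. The posterior over those weights is approximated with
# weighted particles.
n_features, n_particles = 4, 500
particles = rng.gamma(2.0, 1.0, size=(n_particles, n_features))  # weight samples
post = np.full(n_particles, 1.0 / n_particles)                   # posterior mass
interventions = rng.uniform(0.0, 1.0, size=(6, n_features))      # candidates

def eus(choice_set):
    """Expected Utility of Selection of a pairwise choice set: the
    posterior-expected utility (negative cost) of whichever option the
    user would pick, assuming a noiseless response."""
    c = interventions[list(choice_set)] @ particles.T  # (|S|, n_particles)
    return -(post * c.min(axis=0)).sum()               # user picks the cheapest

# Greedy query selection: ask about the pair with maximal EUS.
query = max(combinations(range(len(interventions)), 2), key=eus)
a, b = query
print("ask the user to compare interventions", a, "and", b)

# Bayesian update from the (here, simulated) answer: zero out posterior mass
# on weight samples inconsistent with the stated preference. The paper's
# response model is noisy, which would soften this hard indicator.
true_w = rng.gamma(2.0, 1.0, size=n_features)  # hidden "true" user weights
if interventions[b] @ true_w < interventions[a] @ true_w:
    a, b = b, a                                # make a the chosen option
consistent = interventions[a] @ particles.T <= interventions[b] @ particles.T
post = post * consistent
post /= post.sum()
```

In the full approach described in the abstract, the refined posterior over costs obtained from such updates is what the reinforcement learning agent and the Monte Carlo Tree Search planner consume when searching for low-cost personalized interventions.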