Counterfactual Scenarios for Automated Planning

Main: 9 pages · 1 figure · 1 table · Bibliography: 2 pages
Abstract

Counterfactual Explanations (CEs) are a powerful technique for explaining Machine Learning models by showing how the input to a model should be minimally changed for the model to produce a different output. Similar proposals have been made in the context of Automated Planning, where CEs have been characterised in terms of minimal modifications to an existing plan that would result in the satisfaction of a different goal. While such explanations may help diagnose faults and reason about the characteristics of a plan, they fail to capture higher-level properties of the problem being solved. To address this limitation, we propose a novel explanation paradigm based on counterfactual scenarios. In particular, given a planning problem P and an LTLf formula ψ defining desired properties of a plan, counterfactual scenarios identify minimal modifications to P such that it admits plans that comply with ψ. In this paper, we present two qualitative instantiations of counterfactual scenarios based on an explicit quantification over the plans that must satisfy ψ. We then characterise the computational complexity of generating such counterfactual scenarios when different types of changes are allowed on P. We show that producing counterfactual scenarios is often only as expensive as computing a plan for P, thus demonstrating the practical viability of our proposal and ultimately providing a framework to construct practical algorithms in this area.
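To make the idea concrete, the following is a minimal toy sketch (not the paper's algorithm) of a counterfactual scenario search: given a STRIPS-like problem P and a simple stand-in for a property ψ ("never use a forbidden action"), it looks for a smallest set of candidate actions to add to P so that a ψ-compliant plan exists. All action names, facts, and the candidate pool are hypothetical illustrations.

```python
from itertools import combinations

# Toy STRIPS-style problem (hypothetical example): states are frozensets of
# facts; actions are tuples (name, preconditions, add effects, delete effects).

def successors(state, actions):
    """Yield (action name, successor state) for all applicable actions."""
    for name, pre, add, dele in actions:
        if pre <= state:
            yield name, (state - dele) | add

def find_plan(init, goal, actions, forbidden=frozenset(), max_depth=10):
    """Breadth-first search for a plan reaching `goal` that never uses a
    forbidden action -- a crude stand-in for checking an LTLf property ψ."""
    start = frozenset(init)
    frontier = [(start, [])]
    seen = {start}
    for _ in range(max_depth):
        nxt = []
        for state, plan in frontier:
            if goal <= state:
                return plan
            for name, s2 in successors(state, actions):
                if name in forbidden or s2 in seen:
                    continue
                seen.add(s2)
                nxt.append((s2, plan + [name]))
        frontier = nxt
    return None

def counterfactual_scenario(init, goal, actions, candidates, forbidden):
    """Find a minimal (smallest-first) set of candidate actions to add to the
    problem so that a ψ-compliant plan exists."""
    for k in range(len(candidates) + 1):
        for extra in combinations(candidates, k):
            plan = find_plan(init, goal, actions + list(extra), forbidden)
            if plan is not None:
                return [a[0] for a in extra], plan
    return None

# Hypothetical usage: the original problem can only reach at_c via move_bc,
# but ψ forbids move_bc, so the minimal counterfactual scenario adds move_ac.
A = [("move_ab", {"at_a"}, {"at_b"}, {"at_a"}),
     ("move_bc", {"at_b"}, {"at_c"}, {"at_b"})]
cand = [("move_ac", {"at_a"}, {"at_c"}, {"at_a"})]
result = counterfactual_scenario({"at_a"}, {"at_c"}, A, cand,
                                 forbidden={"move_bc"})
print(result)  # → (['move_ac'], ['move_ac'])
```

The smallest-subset-first loop mirrors the minimality requirement on modifications to P, while the forbidden-action check is only a placeholder for a genuine LTLf compliance check over plan traces.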
