Explainable AI (XAI) aims to support appropriate human-AI reliance by increasing the interpretability of complex model decisions. Despite the proliferation of proposed methods, evidence on how different styles of XAI explanations affect human-AI reliance remains mixed. Interpreting these conflicting findings requires an understanding of the individual and combined qualities of different explanation styles that drive appropriate and inappropriate human-AI reliance, and of the role interpretability plays in this interaction. We investigate the influence of feature-based, example-based, and combined feature- and example-based XAI methods on human-AI reliance through a two-part experimental study with 274 participants comparing these explanation styles. Our findings suggest that feature-based and example-based explanation styles differ in ways beyond interpretability, shaping human-AI reliance patterns that vary with individual performance and task complexity. Our work highlights the importance of adapting explanations to their specific users and context rather than maximising broad interpretability.