Theoretical, Measured and Subjective Responsibility in Aided Decision Making

AI and advanced automation are involved in almost all aspects of our lives. When humans interact with such systems, their causal responsibility for the outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in the interaction with intelligent systems. In two laboratory experiments, participants performed a classification task, aided by binary automated classification systems with different capabilities. We compared the theoretical responsibility values predicted by the ResQu model to the measured responsibility participants actually took on and to their subjective rankings of responsibility. The ResQu model predictions were strongly correlated with both measured and subjective responsibility, and the model generally provided quite accurate predictions of the measured responsibility values. A bias existed only when participants' classification capabilities were much worse than those of the automated classification system. In this case, participants overestimated their own capabilities, relied less than optimally on the automated system, and assumed greater-than-optimal responsibility. The results demonstrate the value of the ResQu model as a descriptive model, with some systematic deviations. A ResQu model score can be computed to predict behavior or perceptions of responsibility, taking into account the characteristics of the human, the intelligent system, and the environment. The ResQu model thus provides a new quantitative method that may aid system design and guide policy and legal decisions regarding human responsibility in events involving intelligent systems.
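
The abstract does not reproduce the ResQu formula, but as a minimal illustration of the kind of information-theoretic score it describes, the sketch below computes the share of outcome uncertainty that the automation alone does not resolve, using the ratio H(outcome | automation) / H(outcome). The function names, the joint-distribution input, and this particular ratio are illustrative assumptions, not the paper's exact definition.

```python
# Illustrative sketch (an assumption, not the paper's exact ResQu formula):
# score = H(outcome | automation) / H(outcome), i.e., the share of outcome
# uncertainty left unresolved by the automated classifier alone.
from collections import defaultdict
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {value: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def responsibility_score(joint):
    """joint: {(automation_output, outcome): probability}, summing to 1."""
    p_auto, p_outcome = defaultdict(float), defaultdict(float)
    for (a, z), p in joint.items():
        p_auto[a] += p
        p_outcome[z] += p
    # H(outcome | automation) = sum_a P(a) * H(outcome | automation = a)
    h_cond = 0.0
    for a, pa in p_auto.items():
        cond = {z: joint[(a2, z)] / pa for (a2, z) in joint if a2 == a}
        h_cond += pa * entropy(cond)
    h_outcome = entropy(p_outcome)
    return h_cond / h_outcome if h_outcome > 0 else 0.0

# Hypothetical joint distribution for a highly accurate binary classifier:
# the automation resolves most of the outcome uncertainty, so the human's
# share (the score) is small.
joint = {
    ("signal", "detect"): 0.48, ("signal", "reject"): 0.02,
    ("noise", "detect"): 0.02, ("noise", "reject"): 0.48,
}
print(round(responsibility_score(joint), 3))
```

Under these assumed numbers the score is roughly 0.24; a less capable classifier would leave more uncertainty to the human and yield a higher score, which is the qualitative pattern the experiments examine.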