Measuring algorithmic interpretability: A human-learning-based framework and the corresponding cognitive complexity score

20 May 2022
John P. Lalor
Hong Guo
arXiv: 2205.10207
Abstract

Algorithmic interpretability is necessary to build trust, ensure fairness, and track accountability. However, there is no existing formal measurement method for algorithmic interpretability. In this work, we build upon programming language theory and cognitive load theory to develop a framework for measuring algorithmic interpretability. The proposed measurement framework reflects the process of a human learning an algorithm. We show that the measurement framework and the resulting cognitive complexity score have the following desirable properties: universality, computability, uniqueness, and monotonicity. We illustrate the measurement framework through a toy example, describe the framework and its conceptual underpinnings, and demonstrate the benefits of the framework, in particular for managers considering tradeoffs when selecting algorithms.
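The abstract describes a score that reflects how hard an algorithm is for a human to learn, grounded in cognitive load theory. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual measurement framework: it assumes per-construct "cognitive load" weights and sums them over a Python AST, so the weight values, the choice of constructs, and the AST-walk approach are all assumptions made for illustration.

```python
# Hypothetical cognitive-complexity-style score: NOT the paper's framework.
# Construct weights below are illustrative assumptions only.
import ast

# Assumed per-construct "cognitive load" weights (hypothetical values).
LOAD_WEIGHTS = {
    ast.For: 3,        # loops require tracking iteration state
    ast.While: 3,
    ast.If: 2,         # branches require holding a condition in mind
    ast.FunctionDef: 2,
    ast.Call: 1,
    ast.BinOp: 1,
    ast.Compare: 1,
    ast.Assign: 1,
}

def cognitive_complexity(source: str) -> int:
    """Sum the assumed cognitive-load weights over all AST nodes of `source`."""
    tree = ast.parse(source)
    return sum(LOAD_WEIGHTS.get(type(node), 0) for node in ast.walk(tree))

if __name__ == "__main__":
    linear_search = """
def find(xs, target):
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1
"""
    binary_search = """
def find(xs, target):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        elif xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
"""
    # A manager-style comparison: the algorithm that is easier to learn gets a
    # lower score, mirroring the interpretability-vs-efficiency tradeoff the
    # abstract mentions.
    print("linear search:", cognitive_complexity(linear_search))
    print("binary search:", cognitive_complexity(binary_search))
```

Under these assumed weights, the simpler linear search scores lower than binary search, which is the kind of comparison a decision-maker might use when trading interpretability against efficiency; the paper's own score is derived from a human-learning process rather than from fixed syntactic weights.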
