arXiv: 2001.02271

Revealing Neural Network Bias to Non-Experts Through Interactive Counterfactual Examples

7 January 2020
Chelsea M. Myers
Evan Freed
Luis Fernando Laris Pardo
Anushay Furqan
S. Risi
Jichen Zhu
Abstract

AI algorithms are not immune to biases. Traditionally, non-experts have had little ability to uncover potential social bias (e.g., gender bias) in the algorithms that may impact their lives. We present a preliminary design for CEB, an interactive visualization tool that reveals biases in a commonly used AI method, neural networks (NNs). CEB combines counterfactual examples with an abstraction of the NN's decision process to empower non-experts to detect bias. This paper presents the design of CEB and initial findings from an expert panel (n=6) of AI, HCI, and social science experts.
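
The abstract describes CEB only at a high level, so the sketch below illustrates the general counterfactual-probing idea it builds on rather than the CEB tool itself: train a small neural network on data containing a sensitive attribute, flip that attribute in a single input, and compare the model's predictions. The synthetic hiring-style data, the feature encoding, the scikit-learn MLPClassifier, and the counterfactual_probe helper are all illustrative assumptions, not details from the paper.

# A minimal sketch (not the authors' CEB tool) of counterfactual probing:
# flip a sensitive attribute in one input and compare the NN's predictions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic hiring-style data: [gender (0/1), years_experience, test_score].
# The label is deliberately correlated with gender to simulate a biased model.
n = 1000
gender = rng.integers(0, 2, n)
experience = rng.normal(5, 2, n)
score = rng.normal(70, 10, n)
y = ((0.3 * experience + 0.05 * score + 1.5 * gender) > 5.75).astype(int)
X = np.column_stack([gender, experience, score])

# A small neural network, standing in for the opaque model under inspection.
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

def counterfactual_probe(model, x, sensitive_idx=0):
    """Return the model's prediction for x and for x with the sensitive
    attribute flipped; a large gap suggests the attribute drives the decision."""
    x_cf = x.copy()
    x_cf[sensitive_idx] = 1 - x_cf[sensitive_idx]  # flip the binary attribute
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    p_cf = model.predict_proba(x_cf.reshape(1, -1))[0, 1]
    return p_orig, p_cf

applicant = np.array([0.0, 6.0, 75.0])  # hypothetical applicant, gender encoded as 0
p, p_flipped = counterfactual_probe(nn, applicant)
print(f"P(hire) as encoded: {p:.2f}  |  with gender flipped: {p_flipped:.2f}")

A large gap between the two probabilities is exactly the kind of evidence a tool like CEB aims to surface for non-experts, without requiring them to inspect the network's weights directly.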
