AI Biases as Asymmetries: A Review to Guide Practice

Frontiers in Big Data (FBD), 2025
10 March 2025
Gabriella Waters
Phillip Honenberger
Abstract

The understanding of bias in AI is currently undergoing a revolution. Initially understood as errors or flaws, biases are increasingly recognized as integral to AI systems and sometimes preferable to less biased alternatives. In this paper, we review the reasons for this changed understanding and provide new guidance on two questions: First, how should we think about and measure biases in AI systems, consistent with the new understanding? Second, what kinds of bias in an AI system should we accept or even amplify, and what kinds should we minimize or eliminate, and why? The key to answering both questions, we argue, is to understand biases as "violations of a symmetry standard" (following Kelly). We distinguish three main types of asymmetry in AI systems (error biases, inequality biases, and process biases) and highlight places in the pipeline of AI development and application where bias of each type is likely to be good, bad, or inevitable.
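To make the "asymmetry" framing concrete, the sketch below shows one possible way to quantify two of the bias types the abstract names on a toy binary classifier: an error bias as a gap in error rates across groups, and an inequality bias as a gap in positive-prediction rates. The group labels, the toy data, and the choice of symmetry standard (equal error rates and equal positive rates) are illustrative assumptions, not the paper's own operationalization.

```python
# Hedged sketch: measuring two asymmetry types on toy classifier output.
# "error_gap" tracks an error bias (unequal error rates across groups);
# "inequality_gap" tracks an inequality bias (unequal positive rates).
from collections import defaultdict

def rate(flags):
    """Fraction of True values in a list; 0.0 if the list is empty."""
    return sum(flags) / len(flags) if flags else 0.0

def asymmetries(y_true, y_pred, groups):
    """Per-group error and positive-prediction rates, plus the max
    pairwise gap for each (the magnitude of the asymmetry)."""
    errors, positives = defaultdict(list), defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        errors[g].append(t != p)      # contributes to an error bias
        positives[g].append(p == 1)   # contributes to an inequality bias
    err = {g: rate(v) for g, v in errors.items()}
    pos = {g: rate(v) for g, v in positives.items()}
    gap = lambda d: max(d.values()) - min(d.values())
    return {"error_rates": err, "error_gap": gap(err),
            "positive_rates": pos, "inequality_gap": gap(pos)}

# Toy data: two groups with identical labels but different predictions.
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(asymmetries(y_true, y_pred, groups))
```

A zero gap means the chosen symmetry standard is satisfied; whether a nonzero gap should be reduced, accepted, or even amplified is exactly the normative question the paper addresses.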

Main text: 24 pages, 3 figures, 4 tables