ResearchTrend.AI
Healthy Distrust in AI systems

14 May 2025
Benjamin Paaßen
Suzana Alpsancar
Tobias Matzner
Ingrid Scharlau
Abstract

Under the slogan of trustworthy AI, much of contemporary AI research is focused on designing AI systems and usage practices that inspire human trust and thus enhance adoption of AI systems. However, a person affected by an AI system may not be convinced by AI system design alone -- nor should they be, if the AI system is embedded in a social context that gives good reason to believe it is used in tension with that person's interests. In such cases, distrust in the system may be justified and necessary to build meaningful trust in the first place. We propose the term "healthy distrust" to describe such a justified, careful stance towards certain AI usage practices. We investigate prior notions of trust and distrust in computer science, sociology, history, psychology, and philosophy; outline a remaining gap that healthy distrust might fill; and conceptualize healthy distrust as a crucial part of AI usage that respects human autonomy.

@article{paaßen2025_2505.09747,
  title={Healthy Distrust in AI systems},
  author={Benjamin Paaßen and Suzana Alpsancar and Tobias Matzner and Ingrid Scharlau},
  journal={arXiv preprint arXiv:2505.09747},
  year={2025}
}