
Do you see what I see? An Ambiguous Optical Illusion Dataset exposing limitations of Explainable AI

Abstract

Machine learning algorithms are relied upon in tasks ranging from uncertainty quantification to real-world object detection, particularly in safety-critical domains such as autonomous driving and medical diagnostics. Ambiguous data plays an important role across these domains. Optical illusions present a compelling area of study in this context, as they offer insight into the limitations of both human and machine perception. Despite this relevance, optical illusion datasets remain scarce. In this work, we introduce a novel dataset of optical illusions featuring intermingled animal pairs designed to evoke perceptual ambiguity. We identify generalizable visual concepts, particularly gaze direction and eye cues, as subtle yet impactful features that significantly influence model accuracy. By confronting models with perceptual ambiguity, our findings underscore the importance of concepts in visual learning and provide a foundation for studying bias and alignment between human and machine vision. To make the dataset useful for general purposes, we generate optical illusions systematically, varying the concepts discussed in our bias mitigation section. The dataset is accessible on Kaggle via this https URL. Our source code can be found at this https URL.
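As a rough illustration of how such an image dataset might be consumed, the sketch below loads a class-per-folder collection of illusion images with torchvision and tallies the top-1 predictions of a pretrained ImageNet classifier per ground-truth label. The directory layout ("illusions/<animal_label>/<image>.png"), the choice of ResNet-50, and the label names are assumptions for illustration only; they are not the dataset's documented structure or the authors' pipeline.

# Minimal sketch (assumed layout, not the authors' pipeline): run a pretrained
# ImageNet classifier over an ambiguous-illusion image folder and tally its
# top-1 predictions per ground-truth label.
from collections import Counter, defaultdict

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: illusions/<animal_label>/<image>.png
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("illusions", transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=False)

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
imagenet_names = weights.meta["categories"]

# For each ground-truth illusion label, count which ImageNet class the model
# picks; truly ambiguous images should spread mass across both animals of a pair.
tallies = defaultdict(Counter)
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        for y, p in zip(labels.tolist(), preds.tolist()):
            tallies[dataset.classes[y]][imagenet_names[p]] += 1

for label, counts in tallies.items():
    print(label, counts.most_common(3))

Inspecting the prediction tallies in this way is one simple means of checking whether a model systematically favors one animal of an intermingled pair, which is the kind of bias the dataset is designed to surface.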

@article{newen2025_2505.21589,
  title={Do you see what I see? An Ambiguous Optical Illusion Dataset exposing limitations of Explainable AI},
  author={Carina Newen and Luca Hinkamp and Maria Ntonti and Emmanuel Müller},
  journal={arXiv preprint arXiv:2505.21589},
  year={2025}
}