
IllusionCAPTCHA: A CAPTCHA based on Visual Illusion

Abstract

CAPTCHAs have long been essential tools for protecting applications from automated bots. Initially designed as simple questions to distinguish humans from bots, they have grown increasingly complex to keep pace with the proliferation of CAPTCHA-cracking techniques employed by malicious actors. However, with the advent of advanced large language models (LLMs), the effectiveness of existing CAPTCHAs is being undermined. To address this issue, we conducted an empirical study evaluating how well multimodal LLMs solve CAPTCHAs and how many attempts human users typically need to pass them. Our findings reveal that while LLMs can solve most CAPTCHAs, they struggle with those requiring complex reasoning, a type that also poses significant challenges for human users. Notably, our user study shows that the majority of human participants need a second attempt to pass these reasoning CAPTCHAs, a finding not reported in previous research. Based on these empirical findings, we present IllusionCAPTCHA, a novel security mechanism built on the "Human-Easy but AI-Hard" paradigm. IllusionCAPTCHA uses visual illusions to create tasks that are intuitive for humans but highly confusing for AI models. Furthermore, we developed a structured, step-by-step method that generates misleading options, deliberately guiding LLMs toward incorrect choices and reducing their chances of solving the CAPTCHA. Our evaluation shows that IllusionCAPTCHA deceives LLMs 100% of the time, and our structured design significantly increases the likelihood of AI errors on these challenges. Results from our user study indicate that 86.95% of participants passed the CAPTCHA on their first attempt, outperforming other CAPTCHA systems.
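To make the "Human-Easy but AI-Hard" idea concrete, the sketch below illustrates one possible way to pair an illusion image (whose hidden content humans readily perceive) with multiple-choice options that include a "literal" distractor an LLM describing the raw pixels is likely to pick. This is a minimal illustration only; all names here (IllusionChallenge, build_challenge, the example image and answers) are hypothetical and are not the authors' actual implementation.

```python
# Minimal sketch of an illusion-based CAPTCHA challenge with misleading options.
# Hypothetical names throughout; not the IllusionCAPTCHA codebase.

import random
from dataclasses import dataclass, field


@dataclass
class IllusionChallenge:
    image_path: str          # illusion image whose hidden word/shape humans can see
    hidden_answer: str       # what a human perceives in the illusion
    literal_content: str     # what a surface-level description of the pixels suggests
    options: list[str] = field(default_factory=list)


def build_challenge(image_path: str, hidden_answer: str,
                    literal_content: str, decoys: list[str]) -> IllusionChallenge:
    """Assemble the option list so that the literal reading acts as an LLM trap."""
    options = [hidden_answer, literal_content, *decoys]
    random.shuffle(options)
    return IllusionChallenge(image_path, hidden_answer, literal_content, options)


def verify(challenge: IllusionChallenge, selected: str) -> bool:
    """A response passes only if it matches the human-perceivable hidden answer."""
    return selected.strip().lower() == challenge.hidden_answer.lower()


if __name__ == "__main__":
    challenge = build_challenge(
        image_path="illusion_01.png",                 # hypothetical asset
        hidden_answer="STOP",                         # word hidden in the illusion
        literal_content="a spiral of gray stripes",   # pixel-level (LLM-attractive) reading
        decoys=["GO", "a checkerboard pattern"],
    )
    print("Which of these do you see in the image?", challenge.options)
    print("Pass:", verify(challenge, "STOP"))
```

The key design point the sketch tries to capture is that the distractor set always contains a literal description of the image, so a model that reports what the pixels show, rather than what a human perceives, is steered toward the wrong option.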

@article{ding2025_2502.05461,
  title={IllusionCAPTCHA: A CAPTCHA based on Visual Illusion},
  author={Ziqi Ding and Gelei Deng and Yi Liu and Junchen Ding and Jieshan Chen and Yulei Sui and Yuekang Li},
  journal={arXiv preprint arXiv:2502.05461},
  year={2025}
}