AICD Bench: A Challenging Benchmark for AI-Generated Code Detection
Large language models (LLMs) are increasingly capable of generating functional source code, raising concerns about authorship, accountability, and security. While detecting AI-generated code is critical, existing datasets and benchmarks are narrow, typically limited to binary human-machine classification under in-distribution settings. To bridge this gap, we introduce AICD Bench, the most comprehensive benchmark to date for AI-generated code detection. It spans a large collection of code samples across multiple programming languages and a wide range of generator models, including recent reasoning models. Beyond scale, AICD Bench introduces three realistic detection tasks: (i) detection under distribution shifts in language and domain, (ii) attribution that groups generators by architectural lineage, and (iii) classification across human, machine, hybrid, and adversarial code. Extensive evaluation of neural and classical detectors shows that performance remains far below practical usability, particularly under distribution shift and for hybrid or adversarial code. We release AICD Bench to drive the next generation of robust approaches to AI-generated code detection. The data and code are available at this https URL.
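To make the binary human-vs-machine task concrete, the following is a minimal illustrative sketch of a classical stylometric baseline, not one of the benchmark's actual detectors: a character-trigram nearest-centroid classifier, where each class (human or machine) is summarized by the aggregate trigram counts of its training snippets. All function names and the toy data are hypothetical.

```python
# Toy binary AI-code detector (illustrative only; NOT the AICD Bench
# method): character-trigram profiles compared by cosine similarity.
from collections import Counter
import math

def trigrams(code: str) -> Counter:
    """Character-trigram frequency profile of a code snippet."""
    return Counter(code[i:i + 3] for i in range(len(code) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(samples: list[str]) -> Counter:
    """Class profile: summed trigram counts over all training samples."""
    total = Counter()
    for s in samples:
        total.update(trigrams(s))
    return total

def classify(snippet: str, human_c: Counter, machine_c: Counter) -> str:
    """Assign the label whose centroid is closer in cosine similarity."""
    g = trigrams(snippet)
    return "machine" if cosine(g, machine_c) > cosine(g, human_c) else "human"
```

Such shallow features are exactly the kind of signal that degrades under the benchmark's distribution shifts in language and domain, which motivates evaluating detectors beyond in-distribution settings.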