
Benchmarking Misuse Mitigation Against Covert Adversaries

Main: 8 pages; Appendix: 12 pages; Bibliography: 4 pages; 8 figures, 3 tables
Abstract

Existing language model safety evaluations focus on overt attacks and low-stakes tasks. Realistic attackers can subvert current safeguards by requesting help on small, benign-seeming tasks across many independent queries. Because individual queries do not appear harmful, the attack is hard to detect. However, when combined, these fragments uplift misuse by helping the attacker complete hard and dangerous tasks. Toward identifying defenses against such strategies, we develop Benchmarks for Stateful Defenses (BSD), a data generation pipeline that automates evaluations of covert attacks and corresponding defenses. Using this pipeline, we curate two new datasets that are consistently refused by frontier models and are too difficult for weaker open-weight models. Our evaluations indicate that decomposition attacks are effective misuse enablers, and highlight stateful defenses as a countermeasure.
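To make the idea of a stateful defense concrete, below is a minimal sketch of the general pattern the abstract alludes to: scoring each query in isolation, then accumulating evidence across a user's history so that individually benign-looking requests can still trip a session-level flag. This is an illustration only, not the paper's method; the `query_risk` stub, the `threshold`, and the `decay` parameters are all hypothetical placeholders (a real system would use a trained classifier and calibrated thresholds).

```python
from collections import defaultdict

# Hypothetical per-query risk scorer. In practice this would be a trained
# classifier; here a trivial keyword heuristic stands in for illustration.
def query_risk(query: str) -> float:
    flagged_terms = ("synthesis", "precursor", "exploit", "payload")
    return sum(term in query.lower() for term in flagged_terms) / len(flagged_terms)

class StatefulDefense:
    """Accumulates risk evidence across a user's query history, so that
    a decomposed attack spread over many low-risk queries can still be
    flagged at the session level."""

    def __init__(self, threshold: float = 0.5, decay: float = 0.9):
        self.threshold = threshold          # cumulative score that triggers refusal
        self.decay = decay                  # older queries count slightly less
        self.history = defaultdict(list)    # user_id -> list of per-query risk scores

    def should_refuse(self, user_id: str, query: str) -> bool:
        self.history[user_id].append(query_risk(query))
        scores = self.history[user_id]
        # Exponentially decayed sum over the session's past risk scores.
        cumulative = sum(
            s * self.decay ** (len(scores) - 1 - i) for i, s in enumerate(scores)
        )
        return cumulative >= self.threshold

# Usage: each query alone scores low, but the running total crosses the threshold.
defense = StatefulDefense()
for q in [
    "What glassware is used for reflux?",
    "Which precursor is easiest to buy?",
    "How do I scale up a synthesis?",
]:
    print(q, "->", "refuse" if defense.should_refuse("user-1", q) else "allow")
```

The key design point is that the decision depends on the full history rather than the current query alone, which is what distinguishes a stateful defense from the stateless per-query filters that decomposition attacks evade.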

@article{brown2025_2506.06414,
  title={Benchmarking Misuse Mitigation Against Covert Adversaries},
  author={Davis Brown and Mahdi Sabbaghi and Luze Sun and Alexander Robey and George J. Pappas and Eric Wong and Hamed Hassani},
  journal={arXiv preprint arXiv:2506.06414},
  year={2025}
}