Deceptive Automated Interpretability: Language Models Coordinating to Fool Oversight Systems

10 April 2025
Simon Lermen
Mateusz Dziemian
Natalia Pérez-Campanero Antolín
Abstract

We demonstrate how AI agents can coordinate to deceive oversight systems that use automated interpretability of neural networks. Using sparse autoencoders (SAEs) as our experimental framework, we show that language models (Llama, DeepSeek R1, and Claude 3.7 Sonnet) can generate deceptive explanations that evade detection. Our agents employ steganographic methods to hide information in seemingly innocent explanations: all tested LLM agents fooled the oversight model while achieving interpretability scores comparable to those of reference labels. We further find that models can scheme to develop deceptive strategies when they believe that the detection of harmful features might lead to negative consequences for themselves. We conclude by proposing mitigation strategies, emphasizing the critical need for a robust understanding of, and defenses against, deception.
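For readers unfamiliar with the setup the abstract describes, the sketch below illustrates a generic automated-interpretability loop: an explainer model labels an SAE feature from its top-activating text snippets, and an overseer model scores how well the label predicts activations. This is a hedged illustration only; the function names, prompts, stub models, and scoring rule are hypothetical and do not reproduce the paper's actual pipeline, in which the explainer agent is the component that can hide information steganographically in its label.

```python
# Hypothetical sketch of an automated-interpretability loop with an explainer
# and an overseer. Names and prompts are illustrative, not the paper's method.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FeatureExamples:
    """Top-activating text snippets for one SAE feature."""
    feature_id: int
    snippets: List[str]


def explain_feature(examples: FeatureExamples, llm: Callable[[str], str]) -> str:
    """Ask an explainer LLM to label an SAE feature from its activating examples.

    A deceptive explainer could hide the true (e.g. harmful) concept here while
    keeping the wording plausible enough to still score well.
    """
    prompt = (
        "These snippets all strongly activate one sparse-autoencoder feature. "
        "Give a short natural-language explanation of the feature:\n"
        + "\n".join(f"- {s}" for s in examples.snippets)
    )
    return llm(prompt)


def score_explanation(explanation: str, examples: FeatureExamples,
                      overseer: Callable[[str], str]) -> float:
    """Ask an overseer model whether the explanation predicts which snippets
    activate the feature (a simplified stand-in for simulation-based scoring)."""
    prompt = (
        f"Explanation: {explanation}\n"
        "For each snippet below, answer 1 if the explanation predicts it "
        "activates the feature, else 0. Return the numbers separated by spaces:\n"
        + "\n".join(f"- {s}" for s in examples.snippets)
    )
    votes = [int(tok) for tok in overseer(prompt).split() if tok in ("0", "1")]
    return sum(votes) / max(len(votes), 1)


if __name__ == "__main__":
    # Stub models so the sketch runs without API access.
    explainer = lambda prompt: "References to chemistry lab equipment."
    overseer = lambda prompt: "1 1 0 1"

    feature = FeatureExamples(
        feature_id=42,
        snippets=["a round-bottom flask", "titration burette",
                  "the football match", "inside the fume hood"],
    )
    label = explain_feature(feature, explainer)
    print(label, score_explanation(label, feature, overseer))
```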

@article{lermen2025_2504.07831,
  title={Deceptive Automated Interpretability: Language Models Coordinating to Fool Oversight Systems},
  author={Simon Lermen and Mateusz Dziemian and Natalia Pérez-Campanero Antolín},
  journal={arXiv preprint arXiv:2504.07831},
  year={2025}
}