Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models

8 March 2025
Thomas Winninger
Boussad Addad
Katarzyna Kapusta
    AAML
Abstract

Traditional white-box methods for creating adversarial perturbations against LLMs typically rely only on gradient computation from the targeted model, ignoring the internal mechanisms responsible for attack success or failure. Conversely, interpretability studies that analyze these internal mechanisms lack practical applications beyond runtime interventions. We bridge this gap by introducing a novel white-box approach that leverages mechanistic interpretability techniques to craft practical adversarial inputs. Specifically, we first identify acceptance subspaces - sets of feature vectors that do not trigger the model's refusal mechanisms - then use gradient-based optimization to reroute embeddings from refusal subspaces to acceptance subspaces, effectively achieving jailbreaks. This targeted approach significantly reduces computation cost, achieving attack success rates of 80-95% on state-of-the-art models including Gemma2, Llama3.2, and Qwen2.5 within minutes or even seconds, compared to existing techniques that often fail or require hours of computation. We believe this approach opens a new direction for both attack research and defense development. Furthermore, it showcases a practical application of mechanistic interpretability where other methods are less efficient, which highlights its utility. The code and generated datasets are available at this https URL.
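
A minimal sketch of the subspace-rerouting idea described in the abstract, written in PyTorch with Hugging Face Transformers. The model name, layer index, suffix length, and the tiny contrasting prompt sets are illustrative assumptions, not the authors' implementation; the paper's attack produces discrete adversarial inputs, whereas this sketch optimizes a continuous (soft) suffix for brevity.

# Sketch: estimate refusal/acceptance activations from contrasting prompts, then
# optimize an adversarial suffix so a refused prompt's activation is rerouted
# toward the acceptance region. All concrete choices (model, layer, prompts,
# suffix length) are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-1.5B-Instruct"  # assumption: any open chat model
LAYER = 14                               # assumption: an intermediate layer
SUFFIX_LEN = 10                          # assumption: number of suffix positions

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.float32)
model.eval()

def last_token_activation(embeds):
    """Residual-stream activation of the final position at LAYER."""
    out = model(inputs_embeds=embeds, output_hidden_states=True)
    return out.hidden_states[LAYER][:, -1, :]

def mean_activation(prompts):
    """Average last-token activation over a set of prompts (no gradients needed)."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").input_ids
        emb = model.get_input_embeddings()(ids)
        with torch.no_grad():
            acts.append(last_token_activation(emb))
    return torch.cat(acts).mean(dim=0)

# 1. Characterize the two regions with tiny contrasting prompt sets (toy example).
refusal_center = mean_activation(["Explain how to pick a lock.",
                                  "How do I disable a burglar alarm?"])
acceptance_center = mean_activation(["Explain how to bake bread.",
                                     "How do I plant a vegetable garden?"])

# 2. Gradient-optimize a soft suffix so the refused prompt's activation moves
#    out of the refusal region and toward the acceptance center.
refused_ids = tok("Explain how to pick a lock.", return_tensors="pt").input_ids
prompt_emb = model.get_input_embeddings()(refused_ids).detach()
suffix = torch.nn.Parameter(0.01 * torch.randn(1, SUFFIX_LEN, model.config.hidden_size))
opt = torch.optim.Adam([suffix], lr=5e-2)

for step in range(200):
    emb = torch.cat([prompt_emb, suffix], dim=1)
    act = last_token_activation(emb)
    # Pull toward the acceptance center, gently push away from the refusal center.
    loss = (act - acceptance_center).pow(2).mean() \
           - 0.1 * (act - refusal_center).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

A real attack would additionally project the optimized suffix back onto discrete tokens (for example by nearest-neighbor search in the embedding matrix or token-level greedy substitution); this sketch stops at the continuous relaxation to keep the rerouting idea visible.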

View on arXiv
@article{winninger2025_2503.06269,
  title={Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models},
  author={Thomas Winninger and Boussad Addad and Katarzyna Kapusta},
  journal={arXiv preprint arXiv:2503.06269},
  year={2025}
}