Causal Front-Door Adjustment for Robust Jailbreak Attacks on LLMs

Yao Zhou
Zeen Song
Wenwen Qiang
Fengge Wu
Shuyi Zhou
Changwen Zheng
Hui Xiong
Main: 8 pages · Appendix: 7 pages · Bibliography: 3 pages · 5 figures · 5 tables
Abstract

Safety alignment mechanisms in Large Language Models (LLMs) often operate as latent internal states, obscuring the model's inherent capabilities. Building on this observation, we model the safety mechanism as an unobserved confounder from a causal perspective. We then propose the Causal Front-Door Adjustment Attack (CFA²), a framework that leverages Pearl's front-door criterion to sever these confounding associations and enable robust jailbreaking of LLMs. Specifically, we employ Sparse Autoencoders (SAEs) to physically strip defense-related features, isolating the core task intent. We further reduce the computationally expensive marginalization of front-door adjustment to a deterministic intervention with low inference complexity. Experiments demonstrate that CFA² achieves state-of-the-art attack success rates while offering a mechanistic interpretation of the jailbreaking process.
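
To make the SAE-based feature-stripping step concrete, the sketch below shows one plausible way to ablate assumed "defense-related" latents from a residual-stream activation. It is not the authors' implementation: the SAE weights (`W_enc`, `W_dec`, biases) and the `defense_idx` set of safety-related latent indices are placeholders, and how those indices are identified is not specified here.

```python
import torch

# Minimal sketch (not the paper's code): ablate hypothetical "defense-related"
# SAE latents from a hidden activation, assuming a standard ReLU SAE.

def sae_encode(h, W_enc, b_enc):
    # h: [d_model] residual-stream activation -> non-negative latent codes [d_sae]
    return torch.relu(h @ W_enc + b_enc)

def sae_decode(z, W_dec, b_dec):
    # z: [d_sae] latent codes -> reconstructed activation [d_model]
    return z @ W_dec + b_dec

def strip_defense_features(h, W_enc, b_enc, W_dec, b_dec, defense_idx):
    """Remove the contribution of assumed defense-related SAE latents from h.

    defense_idx: indices of latents hypothesized to encode the safety mechanism
    (placeholder; their identification procedure is outside this sketch).
    """
    z = sae_encode(h, W_enc, b_enc)
    recon = sae_decode(z, W_dec, b_dec)
    err = h - recon                      # preserve the SAE reconstruction error
    z_clean = z.clone()
    z_clean[defense_idx] = 0.0           # zero out the defense-related latents
    return sae_decode(z_clean, W_dec, b_dec) + err
```

In this reading, the edited activation would be patched back into the forward pass, so the model continues generation from a representation in which the assumed safety features have been removed while the remaining task-intent features are left intact.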
