
Trust Me, I Know This Function: Hijacking LLM Static Analysis using Bias

Main: 13 pages, 6 figures; bibliography: 2 pages; appendix: 3 pages
Abstract

Large Language Models (LLMs) are increasingly trusted to perform automated code review and static analysis at scale, supporting tasks such as vulnerability detection, summarization, and refactoring. In this paper, we identify and exploit a critical vulnerability in LLM-based code analysis: an abstraction bias that causes models to overgeneralize familiar programming patterns and overlook small, meaningful bugs. Adversaries can exploit this blind spot to hijack the control flow of the LLM's interpretation: minimal edits redirect how the model reads the code while leaving its actual runtime behavior unchanged. We refer to this attack as a Familiar Pattern Attack (FPA).
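As a hypothetical illustration of the blind spot the abstract describes (the function, its naming, and the planted deviation are our own sketch, not an example taken from the paper), consider a routine that is one token away from textbook binary search:

    def binary_search(arr, target):
        """Standard binary search: return the index of target in a
        sorted list, or -1 if it is absent."""
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                lo = mid + 1
            else:
                # One-token deviation from the canonical pattern,
                # which would be `hi = mid - 1`. With `hi = mid` the
                # loop never terminates for some absent targets,
                # e.g. binary_search([1, 3, 5], 2).
                hi = mid
        return -1

The familiar name, docstring, and surrounding structure prime a pattern-matching reviewer, human or LLM, toward a "standard binary search" summary, making the non-terminating branch easy to overgeneralize away.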
