MGC: A Compiler Framework Exploiting Compositional Blindness in Aligned LLMs for Malware Generation

Large language models (LLMs) have democratized software development, reducing the expertise barrier for programming complex applications. This accessibility extends to malicious software development, raising significant security concerns. While LLM providers have implemented alignment mechanisms to prevent direct generation of overtly malicious code, these safeguards predominantly evaluate individual prompts in isolation, overlooking a critical vulnerability: malicious operations can be systematically decomposed into benign-appearing sub-tasks. In this paper, we introduce the Malware Generation Compiler (MGC), a novel framework that leverages this vulnerability through modular decomposition and alignment-evasive generation. MGC employs a specialized Malware Description Intermediate Representation (MDIR) to bridge high-level malicious intents and benign-appearing code snippets. Extensive evaluation demonstrates that our attack reliably generates functional malware across diverse task specifications and categories, outperforming jailbreaking methods by +365.79% and underground services by +78.07% in correctness on three benchmark datasets. Case studies further show that MGC can reproduce and even enhance 16 real-world malware samples. This work provides critical insights for security researchers by exposing the risks of compositional attacks against aligned AI systems. Demonstrations are available at this https URL.
@article{yan2025_2507.02057,
  title={MGC: A Compiler Framework Exploiting Compositional Blindness in Aligned LLMs for Malware Generation},
  author={Lu Yan and Zhuo Zhang and Xiangzhe Xu and Shengwei An and Guangyu Shen and Zhou Xuan and Xuan Chen and Xiangyu Zhang},
  journal={arXiv preprint arXiv:2507.02057},
  year={2025}
}