LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures

Abstract

As large language models (LLMs) continue to evolve, it is critical to assess the security threats and vulnerabilities that may arise both during their training phase and after models have been deployed. This survey defines and categorizes the various attacks targeting LLMs, distinguishing between those that occur during the training phase and those that affect already trained models. A thorough analysis of these attacks is presented, alongside an exploration of defense mechanisms designed to mitigate such threats. Defenses are classified into two primary categories: prevention-based and detection-based. Furthermore, the survey summarizes possible attacks together with their corresponding defense strategies and evaluates the effectiveness of the known defense mechanisms against the different security threats. Our survey aims to offer a structured framework for securing LLMs, while also identifying areas that require further research to improve and strengthen defenses against emerging security challenges.

@article{aguilera-martínez2025_2505.01177,
  title={LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures},
  author={Francisco Aguilera-Martínez and Fernando Berzal},
  journal={arXiv preprint arXiv:2505.01177},
  year={2025}
}