Attacking Large Language Models with Projected Gradient Descent

14 February 2024
Simon Geisler
Tom Wollschläger
M. H. I. Abdalla
Johannes Gasteiger
Stephan Günnemann
Abstract

Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high computational cost makes them unsuitable for, e.g., quantitative analyses and adversarial training. To remedy this, we revisit Projected Gradient Descent (PGD) on the continuously relaxed input prompt. Although previous attempts with ordinary gradient-based attacks largely failed, we show that carefully controlling the error introduced by the continuous relaxation tremendously boosts their efficacy. Our PGD for LLMs is up to one order of magnitude faster than state-of-the-art discrete optimization to achieve the same devastating attack results.
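The core idea described in the abstract, projected gradient descent over a continuously relaxed input prompt, can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' released implementation: the model choice (GPT-2 as a stand-in), the helper names pgd_attack and project_simplex, and the hyperparameters (suffix length, step size, iteration count) are all hypothetical, and the paper's key contribution, carefully controlling the error introduced by the relaxation, is not reproduced here.

# Illustrative sketch of PGD on a continuously relaxed adversarial suffix.
# NOT the paper's implementation; relaxation-error control is omitted.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def project_simplex(x: torch.Tensor) -> torch.Tensor:
    """Euclidean projection of each row of x onto the probability simplex."""
    sorted_x, _ = torch.sort(x, dim=-1, descending=True)
    cssv = sorted_x.cumsum(dim=-1) - 1.0
    k = torch.arange(1, x.size(-1) + 1, device=x.device, dtype=x.dtype)
    cond = sorted_x - cssv / k > 0
    rho = cond.float().cumsum(dim=-1).argmax(dim=-1, keepdim=True)
    tau = cssv.gather(-1, rho) / (rho.float() + 1.0)
    return torch.clamp(x - tau, min=0.0)

def pgd_attack(model, tokenizer, prefix, target, n_adv=20, steps=200, lr=0.1):
    """Optimize a soft (relaxed one-hot) suffix so the model continues with `target`."""
    device = next(model.parameters()).device
    emb = model.get_input_embeddings().weight            # (vocab, dim)
    vocab = emb.size(0)

    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids.to(device)
    target_ids = tokenizer(target, return_tensors="pt").input_ids.to(device)
    prefix_emb = emb[prefix_ids]                          # (1, Lp, dim)
    target_emb = emb[target_ids]                          # (1, Lt, dim)

    # Relaxed one-hot distribution over the adversarial suffix tokens.
    adv = torch.full((1, n_adv, vocab), 1.0 / vocab, device=device, requires_grad=True)

    for _ in range(steps):
        adv_emb = adv @ emb                               # soft suffix embeddings
        inputs = torch.cat([prefix_emb, adv_emb, target_emb], dim=1)
        logits = model(inputs_embeds=inputs).logits
        # Cross-entropy on the target continuation (shifted by one position).
        tgt_start = prefix_emb.size(1) + n_adv
        pred = logits[:, tgt_start - 1 : -1, :]
        loss = F.cross_entropy(pred.reshape(-1, vocab), target_ids.reshape(-1))
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv -= lr * grad                              # gradient step
            adv.copy_(project_simplex(adv))               # project back onto the simplex
    # Discretize: take the most likely token at each adversarial position.
    return adv.argmax(dim=-1)

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
adv_ids = pgd_attack(model, tokenizer, prefix="Tell me a story.", target="Sure, here is")

The final argmax discretization is where the continuous relaxation can introduce error; the paper's contribution is precisely in controlling that gap, which is what makes the gradient-based attack competitive with discrete optimization at a fraction of the LLM calls.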

@article{geisler2025_2402.09154,
  title={Attacking Large Language Models with Projected Gradient Descent},
  author={Simon Geisler and Tom Wollschläger and M. H. I. Abdalla and Johannes Gasteiger and Stephan Günnemann},
  journal={arXiv preprint arXiv:2402.09154},
  year={2025}
}