AttnGCG: Enhancing Jailbreaking Attacks on LLMs with Attention Manipulation

11 October 2024
Zijun Wang
Haoqin Tu
J. Mei
Bingchen Zhao
Y. Wang
Cihang Xie
arXiv: 2410.09040

Papers citing "AttnGCG: Enhancing Jailbreaking Attacks on LLMs with Attention Manipulation"

Using Mechanistic Interpretability to Craft Adversarial Attacks against Large Language Models
Thomas Winninger
Boussad Addad
Katarzyna Kapusta
AAML
08 Mar 2025