What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks
arXiv:2411.03343

2 November 2024
Nathalie Maria Kirch
Constantin Weisser
Severin Field
Helen Yannakoudakis
Stephen Casper

Papers citing "What Features in Prompts Jailbreak LLMs? Investigating the Mechanisms Behind Attacks"

1 paper shown:

Preventing Jailbreak Prompts as Malicious Tools for Cybercriminals: A Cyber Defense Perspective
Jean Marie Tshimula
Xavier Ndona
D'Jeff K. Nkashama
Pierre Martin Tardif
F. Kabanza
Marc Frappier
Shengrui Wang
SILM
25 Nov 2024