Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails

15 April 2025
William Hackett
Lewis Birch
Stefan Trawicki
N. Suri
Peter Garraghan
Papers citing "Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails"

1 / 1 papers shown
Title: Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs
Author: Chetan Pathade
Topics: AAML, SILM
Published: 07 May 2025