A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares

9 August 2024
Stav Cohen, Ron Bitton, Ben Nassi
SILM
ArXiv · PDF · HTML

Papers citing "A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares"

4 / 4 papers shown
Teams of LLM Agents can Exploit Zero-Day Vulnerabilities
Richard Fang, Antony Kellermann, Akul Gupta, Qiusi Zhan, Richard Fang, Rohan Bindu, Daniel Kang
LLMAG · 02 Jun 2024

LLM Agents can Autonomously Exploit One-day Vulnerabilities
Richard Fang, Rohan Bindu, Akul Gupta, Daniel Kang
SILM, LLMAG · 11 Apr 2024

What Was Your Prompt? A Remote Keylogging Attack on AI Assistants
Roy Weiss, Daniel Ayzenshteyn, Guy Amit, Yisroel Mirsky
14 Mar 2024

Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin
LLMAG, LM&Ro · 13 Feb 2024