A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares
9 August 2024 · arXiv: 2408.05061
Stav Cohen, Ron Bitton, Ben Nassi
[SILM]
Papers citing "A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares" (4 of 4 shown)
1. Teams of LLM Agents can Exploit Zero-Day Vulnerabilities
   Richard Fang, Antony Kellermann, Akul Gupta, Qiusi Zhan, R. Bindu, Daniel Kang
   [LLMAG] · 2 Jun 2024
2. LLM Agents can Autonomously Exploit One-day Vulnerabilities
   Richard Fang, R. Bindu, Akul Gupta, Daniel Kang
   [SILM, LLMAG] · 11 Apr 2024
3. What Was Your Prompt? A Remote Keylogging Attack on AI Assistants
   Roy Weiss, Daniel Ayzenshteyn, Guy Amit, Yisroel Mirsky
   14 Mar 2024
4. Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast
   Xiangming Gu, Xiaosen Zheng, Tianyu Pang, Chao Du, Qian Liu, Ye Wang, Jing Jiang, Min Lin
   [LLMAG, LM&Ro] · 13 Feb 2024