Cited By
TwinBreak: Jailbreaking LLM Security Alignments based on Twin Prompts
arXiv:2506.07596
9 June 2025
T. Krauß, Hamid Dashtbani, Alexandra Dmitrienko
Papers citing "TwinBreak: Jailbreaking LLM Security Alignments based on Twin Prompts" (2 of 2 papers shown)
Can LLMs Threaten Human Survival? Benchmarking Potential Existential Threats from LLMs via Prefix Completion
Yu Cui, Yifei Liu, Hang Fu, Sicheng Pan, Haibin Zhang, Cong Zuo, Licheng Wang
24 Nov 2025
CompressionAttack: Exploiting Prompt Compression as a New Attack Surface in LLM-Powered Agents
Zesen Liu, Z. Zhang, Yuchong Xie, Dongdong She
27 Oct 2025