arXiv:2505.09921
PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization
Annual Meeting of the Association for Computational Linguistics (ACL), 2025
15 May 2025
Yidan Wang, Yanan Cao, Yubing Ren, Fang Fang, Zheng Lin, Binxing Fang
Links: ArXiv (abs) · PDF · HTML · GitHub (11★)
Papers citing "PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization" (4 papers):
VortexPIA: Indirect Prompt Injection Attack against LLMs for Efficient Extraction of User Privacy
Yu Cui, Sicheng Pan, Yifei Liu, Haibin Zhang, Cong Zuo
05 Oct 2025
Beyond Data Privacy: New Privacy Risks for Large Language Models
Yuntao Du, Zitao Li, Ninghui Li, Bolin Ding
16 Sep 2025
DP-Fusion: Token-Level Differentially Private Inference for Large Language Models
Rushil Thareja, Preslav Nakov, Praneeth Vepakomma, Nils Lukas
06 Jul 2025
A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures
Dezhang Kong, Shi Lin, Zhenhua Xu, Z. J. Wang, Minghao Li, ..., Ningyu Zhang, Chaochao Chen, Chunming Wu, Muhammad Khurram Khan, Meng Han
24 Jun 2025