ResearchTrend.AI

PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization

Annual Meeting of the Association for Computational Linguistics (ACL), 2025
15 May 2025
Yidan Wang, Yanan Cao, Yubing Ren, Fang Fang, Zheng Lin, Binxing Fang
Topic: PILM
Links: arXiv (abs) · PDF · HTML · GitHub (11★)

Papers citing "PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization"

4 / 4 papers shown
VortexPIA: Indirect Prompt Injection Attack against LLMs for Efficient Extraction of User Privacy
Yu Cui, Sicheng Pan, Yifei Liu, Haibin Zhang, Cong Zuo
05 Oct 2025
Beyond Data Privacy: New Privacy Risks for Large Language Models
Yuntao Du, Zitao Li, Ninghui Li, Bolin Ding
Topics: PILM, ELM
16 Sep 2025
DP-Fusion: Token-Level Differentially Private Inference for Large Language Models
Rushil Thareja, Preslav Nakov, Praneeth Vepakomma, Nils Lukas
06 Jul 2025
A Survey of LLM-Driven AI Agent Communication: Protocols, Security Risks, and Defense Countermeasures
Dezhang Kong, Shi Lin, Zhenhua Xu, Z. J. Wang, Minghao Li, ..., Ningyu Zhang, Chaochao Chen, Chunming Wu, Muhammad Khurram Khan, Meng Han
Topic: LLMAG
24 Jun 2025