ResearchTrend.AI

PLeak: Prompt Leaking Attacks against Large Language Model Applications

arXiv:2405.06823 · 10 May 2024
Bo Hui, Haolin Yuan, Neil Gong, Philippe Burlina, Yinzhi Cao
Topics: LLMAG, AAML, SILM

Papers citing "PLeak: Prompt Leaking Attacks against Large Language Model Applications" (13 of 13 papers shown)

  1. LLMs' Suitability for Network Security: A Case Study of STRIDE Threat Modeling
     AbdulAziz AbdulGhaffar, Ashraf Matrawy · 07 May 2025
  2. ASIDE: Architectural Separation of Instructions and Data in Language Models
     Egor Zverev, Evgenii Kortukov, Alexander Panfilov, Soroush Tabesh, Alexandra Volkova, Sebastian Lapuschkin, Wojciech Samek, Christoph H. Lampert · AAML · 13 Mar 2025
  3. Has My System Prompt Been Used? Large Language Model Prompt Membership Inference
     Roman Levin, Valeriia Cherepanova, Abhimanyu Hans, Avi Schwarzschild, Tom Goldstein · 14 Feb 2025
  4. An Empirically-grounded Tool for Automatic Prompt Linting and Repair: A Case Study on Bias, Vulnerability, and Optimization in Developer Prompts
     Dhia Elhaq Rzig, Dhruba Jyoti Paul, Kaiser Pister, Jordan Henkel, Foyzul Hassan · 21 Jan 2025
  5. Reconstruction of Differentially Private Text Sanitization via Large Language Models
     Shuchao Pang, Zhigang Lu, H. Wang, Peng Fu, Yongbin Zhou, Minhui Xue · AAML · 16 Oct 2024
  6. The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems
     Linke Song, Zixuan Pang, Wenhao Wang, Zihao Wang, XiaoFeng Wang, Hongbo Chen, Wei Song, Yier Jin, Dan Meng, Rui Hou · 30 Sep 2024
  7. PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs
     Jiahao Yu, Yangguang Shao, Hanwen Miao, Junzheng Shi · SILM, AAML · 23 Sep 2024
  8. Stealing the Decoding Algorithms of Language Models
     A. Naseh, Kalpesh Krishna, Mohit Iyyer, Amir Houmansadr · MLAU · 08 Mar 2023
  9. Making Pre-trained Language Models Better Few-shot Learners
     Tianyu Gao, Adam Fisch, Danqi Chen · 31 Dec 2020
  10. Extracting Training Data from Large Language Models
      Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel · MLAU, SILM · 14 Dec 2020
  11. Stealing Links from Graph Neural Networks
      Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang · AAML · 05 May 2020
  12. How Can We Accelerate Progress Towards Human-like Linguistic Generalization?
      Tal Linzen · 03 May 2020
  13. Language Models as Knowledge Bases?
      Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel · KELM, AI4MH · 03 Sep 2019