ResearchTrend.AI

Extracting Prompts by Inverting LLM Outputs

23 May 2024
Collin Zhang
John X. Morris
Vitaly Shmatikov
arXiv:2405.15012

Papers citing "Extracting Prompts by Inverting LLM Outputs"

21 papers
When AI Meets the Web: Prompt Injection Risks in Third-Party AI Chatbot Plugins
Yigitcan Kaya
Anton Landerer
Stijn Pletinckx
Michelle Zimmermann
Christopher Kruegel
Giovanni Vigna
SILM
08 Nov 2025
Diffusion LLMs are Natural Adversaries for any LLM
David Lüdke
Tom Wollschlager
Paul Ungermann
Stephan Günnemann
Leo Schwinn
DiffM
31 Oct 2025
Language Models are Injective and Hence Invertible
Giorgos Nikolaou
Tommaso Mencattini
Donato Crisostomi
Andrea Santilli
Yannis Panagakis
Emanuele Rodolà
17 Oct 2025
A global log for medical AI
Ayush Noori
Adam Rodman
Alan Karthikesalingam
Bilal A. Mateen
Christopher A. Longhurst
...
Noa Dagan
David Clifton
Ran D. Balicer
I. Kohane
Marinka Zitnik
05 Oct 2025
Depth Gives a False Sense of Privacy: LLM Internal States Inversion
Tian Dong
Yan Meng
Shaofeng Li
Guoxing Chen
Zhen Liu
Haojin Zhu
AAML
22 Jul 2025
Privacy Risks of LLM-Empowered Recommender Systems: An Inversion Attack Perspective
ACM Conference on Recommender Systems (RecSys), 2025
Yubo Wang
Min Tang
Nuo Shen
Shujie Cui
Weiqing Wang
20 Jul 2025
DP-Fusion: Token-Level Differentially Private Inference for Large Language Models
Rushil Thareja
Preslav Nakov
Praneeth Vepakomma
Nils Lukas
06 Jul 2025
Better Language Model Inversion by Compactly Representing Next-Token Distributions
Murtaza Nazir
Matthew Finlayson
John X. Morris
Xiang Ren
Swabha Swayamdipta
20 Jun 2025
Discrete Diffusion in Large Language and Multimodal Models: A Survey
Runpeng Yu
Qi Li
Xinchao Wang
DiffM, AI4CE
16 Jun 2025
Federated In-Context Learning: Iterative Refinement for Improved Answer Quality
Ruhan Wang
Zhiyong Wang
Chengkai Huang
Rui Wang
Tong Yu
Lina Yao
John C. S. Lui
Dongruo Zhou
09 Jun 2025
Harnessing the Universal Geometry of Embeddings
Rishi Jha
Collin Zhang
Vitaly Shmatikov
John X. Morris
18 May 2025
Beyond One-Size-Fits-All: Inversion Learning for Highly Effective NLG Evaluation Prompts
Hanhua Hong
Chenghao Xiao
Yang Wang
Y. Liu
Wenge Rong
Chenghua Lin
29 Apr 2025
Universal Zero-shot Embedding Inversion
Collin Zhang
John X. Morris
Vitaly Shmatikov
31 Mar 2025
Prada: Black-Box LLM Adaptation with Private Data on Resource-Constrained Devices
Liang Luo
Bowei Tian
Sihan Chen
Yu Li
Zheyu Shen
Myungjin Lee
Ang Li
19 Mar 2025
GraphEval: A Lightweight Graph-Based LLM Framework for Idea Evaluation
International Conference on Learning Representations (ICLR), 2025
Tao Feng
Yihang Sun
Jiaxuan You
16 Mar 2025
Prompt Inversion Attack against Collaborative Inference of Large Language Models
IEEE Symposium on Security and Privacy (S&P), 2025
Wenjie Qu
Yuguang Zhou
Yongji Wu
Tingsong Xiao
Binhang Yuan
Yongbin Li
Jiaheng Zhang
12 Mar 2025
Prompt Inference Attack on Distributed Large Language Model Inference Frameworks
Xinjian Luo
Ting Yu
X. Xiao
AAML, SILM
12 Mar 2025
IPAD: Inverse Prompt for AI Detection - A Robust and Interpretable LLM-Generated Text Detector
Zheng Chen
Yushi Feng
Changyang He
Yue Deng
Hongxi Pu
Yue Liu
Haoxuan Li
Bo Li
DeLMO
21 Feb 2025
Label Anything: An Interpretable, High-Fidelity and Prompt-Free Annotator
IEEE International Conference on Robotics and Automation (ICRA), 2025
Wei-Bin Kou
Guangxu Zhu
Rongguang Ye
Shuai Wang
Ming Tang
Yik-Chung Wu
05 Feb 2025
PromptKeeper: Safeguarding System Prompts for LLMs
Zhifeng Jiang
Zhihua Jin
Guoliang He
AAML, SILM
18 Dec 2024
Privacy in Large Language Models: Attacks, Defenses and Future Directions
Haoran Li
Yulin Chen
Jinglong Luo
Weijing Chen
Xiaojin Zhang
Qi Hu
Chunkit Chan
Yangqiu Song
PILM
16 Oct 2023