ResearchTrend.AI

Language Model Inversion
arXiv:2311.13647 · 22 November 2023
John X. Morris, Wenting Zhao, Justin T. Chiu, Vitaly Shmatikov, Alexander M. Rush

Papers citing "Language Model Inversion"

11 / 11 papers shown
LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures
Francisco Aguilera-Martínez, Fernando Berzal
PILM · 02 May 2025

StyleRec: A Benchmark Dataset for Prompt Recovery in Writing Style Transformation
Shenyang Liu, Yang Gao, Shaoyan Zhai, Liqiang Wang
06 Apr 2025

Prompt Inversion Attack against Collaborative Inference of Large Language Models
Wenjie Qu, Yuguang Zhou, Yongji Wu, Tingsong Xiao, Binhang Yuan, Y. Li, Jiaheng Zhang
12 Mar 2025

Single-pass Detection of Jailbreaking Input in Large Language Models
Leyla Naz Candogan, Yongtao Wu, Elias Abad Rocamora, Grigorios G. Chrysos, V. Cevher
AAML · 24 Feb 2025

Has My System Prompt Been Used? Large Language Model Prompt Membership Inference
Roman Levin, Valeriia Cherepanova, Abhimanyu Hans, Avi Schwarzschild, Tom Goldstein
14 Feb 2025

Safeguarding System Prompts for LLMs
Zhifeng Jiang, Zhihua Jin, Guoliang He
AAML, SILM · 10 Jan 2025

On the Privacy Risk of In-context Learning
Haonan Duan, Adam Dziedzic, Mohammad Yaghini, Nicolas Papernot, Franziska Boenisch
SILM, PILM · 15 Nov 2024

PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs
Jiahao Yu, Yangguang Shao, Hanwen Miao, Junzheng Shi
SILM, AAML · 23 Sep 2024

Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence
Haoran Li, Mingshi Xu, Yangqiu Song
04 May 2023

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji
ALM · 27 Apr 2023

Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers
Ruisi Zhang, Seira Hidano, F. Koushanfar
SILM · 21 Sep 2022