Privacy Issues in Language Models (PILM)

This community focuses on studies that address the identification, implications, and mitigation of privacy concerns in language models, where privacy constitutes a main focus of the paper.

All papers (50 of 248 shown)

SALT: Steering Activations towards Leakage-free Thinking in Chain of Thought (11 Nov 2025)
Shourya Batra, Pierce Tillman, Samarth Gaggar, Shashank Kesineni, Kevin Zhu, Sunishchal Dev, Ashwinee Panda, Vasu Sharma, Maheep Chaudhary
Tags: KELM, PILM, LLMSV, LRM, ELM · 125 / 0 / 0

Whisper Leak: a side-channel attack on Large Language Models (05 Nov 2025)
Geoff McDonald, Jonathan Bar Or
Tags: AAML, PILM · 253 / 0 / 0

A Survey on Unlearning in Large Language Models (29 Oct 2025)
Ruichen Qiu, Jiajun Tan, Jiayue Pu, Honglin Wang, Xiao-Shan Gao, Fei Sun
Tags: MU, AILaw, PILM · 181 / 0 / 0

When Intelligence Fails: An Empirical Study on Why LLMs Struggle with Password Cracking (18 Oct 2025)
M. Rehman, Syed Imad Ali Shah, A. Anwar, Noor Islam
Tags: PILM · 169 / 0 / 0

The Model's Language Matters: A Comparative Privacy Analysis of LLMs (09 Oct 2025)
Abhishek K. Mishra, Antoine Boutet, Lucas Magnana
Tags: PILM · 76 / 0 / 0

Position: Privacy Is Not Just Memorization! (02 Oct 2025)
Niloofar Mireshghallah, Tianshi Li
Tags: PILM · 117 / 1 / 0

Defeating Cerberus: Concept-Guided Privacy-Leakage Mitigation in Multimodal Language Models (29 Sep 2025)
Boyang Zhang, Istemi Ekin Akkus, Ruichuan Chen, Alice Dethise, Klaus Satzke, Ivica Rimac, Yang Zhang
Tags: PILM · 74 / 0 / 0

Beyond Data Privacy: New Privacy Risks for Large Language Models (16 Sep 2025)
Yuntao Du, Zitao Li, Ninghui Li, Bolin Ding
Tags: PILM, ELM · 130 / 0 / 0

Beyond PII: How Users Attempt to Estimate and Mitigate Implicit LLM Inference (15 Sep 2025)
Synthia Wang, Sai Teja Peddinti, Nina Taft, Nick Feamster
Tags: SILM, PILM · 53 / 1 / 0

Safety and Security Analysis of Large Language Models: Benchmarking Risk Profile and Harm Potential (12 Sep 2025)
Charankumar Akiri, Harrison Simpson, Kshitiz Aryal, Aarav Khanna, Maanak Gupta
Tags: PILM, ELM · 99 / 0 / 0

A Survey: Towards Privacy and Security in Mobile Large Language Models (02 Sep 2025)
Honghui Xu, Kaiyang Li, Wei Chen, Danyang Zheng, Zhiyuan Li, Zhipeng Cai
Tags: PILM · 80 / 0 / 0

Safe-LLaVA: A Privacy-Preserving Vision-Language Dataset and Benchmark for Biometric Safety (29 Aug 2025)
Younggun Kim, S. Swetha, Fazil Kagdi, Mubarak Shah
Tags: PILM · 146 / 3 / 0

Data Leakage in Visual Datasets (24 Aug 2025)
Patrick Ramos, Ryan Ramos, Noa Garcia
Tags: PILM · 92 / 0 / 0

Should LLMs be WEIRD? Exploring WEIRDness and Human Rights in Large Language Models (22 Aug 2025)
Ke Zhou, Marios Constantinides, Daniele Quercia
Tags: PILM · 78 / 0 / 0

A Study of Privacy-preserving Language Modeling Approaches (21 Aug 2025)
Pritilata Saha, Abhirup Sinha
Tags: PILM · 93 / 0 / 0

Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models (10 Aug 2025)
Badrinath Ramakrishnan, Akshaya Balaji
Tags: MU, PILM · 141 / 1 / 0

Privacy Risk Predictions Based on Fundamental Understanding of Personal Data and an Evolving Threat Landscape (06 Aug 2025)
Haoran Niu, K. Suzanne Barber
Tags: PILM · 67 / 1 / 0

A Survey on Data Security in Large Language Models (04 Aug 2025)
Kang Chen, Xiuze Zhou, Y. Lin, Jinhe Su, Yuanhui Yu, Li Shen, F. Lin
Tags: PILM, ELM · 89 / 0 / 1

LoRA-Leak: Membership Inference Attacks Against LoRA Fine-tuned Language Models (24 Jul 2025)
Delong Ran, Xinlei He, Tianshuo Cong, Anyu Wang, Cunliang Kong, Xiaoyun Wang
Tags: MIALM, PILM · 141 / 0 / 1

Risk In Context: Benchmarking Privacy Leakage of Foundation Models in Synthetic Tabular Data Generation (22 Jul 2025)
Jessup Byun, Xiaofeng Lin, Joshua Ward, Guang Cheng
Tags: PILM · 53 / 0 / 0

Rethinking Memorization Measures and their Implications in Large Language Models (20 Jul 2025)
Bishwamittra Ghosh, Soumi Das, Qinyuan Wu, Mohammad Aflah Khan, Krishna P. Gummadi, Evimaria Terzi, Deepak Garg
Tags: PILM · 79 / 0 / 0

The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation (08 Jul 2025)
Alexander Xiong, Xuandong Zhao, Aneesh Pappu, Dawn Song
Tags: PILM · 45 / 2 / 0

Controlling What You Share: Assessing Language Model Adherence to Privacy Preferences (07 Jul 2025)
Guillem Ramírez, Alexandra Birch, Ivan Titov
Tags: PILM · 115 / 0 / 0

Model Inversion Attacks on Llama 3: Extracting PII from Large Language Models (06 Jul 2025)
Sathesh P. Sivashanmugam
Tags: PILM · 48 / 0 / 0

A Survey on Model Extraction Attacks and Defenses for Large Language Models (26 Jun 2025)
Kaixiang Zhao, Lincan Li, Kaize Ding, Neil Zhenqiang Gong, Yue Zhao, Yushun Dong
Tags: PILM, ELM · 194 / 2 / 0

PrivacyXray: Detecting Privacy Breaches in LLMs through Semantic Consistency and Probability Certainty (24 Jun 2025)
Jinwen He, Yiyang Lu, Zijin Lin, Kai Chen, Yue Zhao
Tags: PILM · 66 / 0 / 0

Leaky Thoughts: Large Reasoning Models Are Not Private Thinkers (18 Jun 2025)
Tommaso Green, Martin Gubri, Haritz Puerto, Sangdoo Yun, Seong Joon Oh
Tags: MIACV, PILM, ELM, LRM · 740 / 7 / 2

SoK: The Privacy Paradox of Large Language Models: Advancements, Privacy Risks, and Mitigation (AsiaCCS 2025; 15 Jun 2025)
Yashothara Shanmugarasa, Ming Ding, M. Chamikara, Thierry Rakotoarivelo
Tags: PILM, AILaw · 281 / 4 / 0

LLMs on support of privacy and security of mobile apps: state of the art and research directions (13 Jun 2025)
Tran Thanh Lam Nguyen, B. Carminati, E. Ferrari
Tags: PILM · 138 / 0 / 0

Memorization in Language Models through the Lens of Intrinsic Dimension (11 Jun 2025)
Stefan Arnold
Tags: PILM · 190 / 0 / 0

Private Memorization Editing: Turning Memorization into a Defense to Strengthen Data Privacy in Large Language Models (ACL 2025; 09 Jun 2025)
Elena Sofia Ruzzetti, Giancarlo A. Xompero, Davide Venditti, Fabio Massimo Zanzotto
Tags: KELM, PILM · 145 / 1 / 0

PrivTru: A Privacy-by-Design Data Trustee Minimizing Information Leakage (IFIP SEC 2025; 06 Jun 2025)
Lukas Gehring, Florian Tschorsch
Tags: PILM · 130 / 0 / 0

A Systematic Review of Poisoning Attacks Against Large Language Models (06 Jun 2025)
Neil Fendley, Edward W. Staley, Joshua Carney, William Redman, Marie Chau, Nathan G. Drenkow
Tags: AAML, PILM · 87 / 4 / 0

Privacy and Security Threat for OpenAI GPTs (04 Jun 2025)
Wei Wenying, Zhao Kaifa, Xue Lei, Fan Ming
Tags: SILM, PILM, ELM · 131 / 0 / 0

Synthetic Iris Image Databases and Identity Leakage: Risks and Mitigation Strategies (03 Jun 2025)
Ada Sawilska, Mateusz Trokielewicz
Tags: PILM · 138 / 1 / 0

Self-Refining Language Model Anonymizers via Adversarial Distillation (02 Jun 2025)
Kyuyoung Kim, Hyunjun Jeon, Jinwoo Shin
Tags: PILM · 184 / 1 / 0

The Inverse Scaling Effect of Pre-Trained Language Model Surprisal Is Not Due to Data Leakage (ACL 2025; 01 Jun 2025)
Byung-Doh Oh, Hongao Zhu, William Schuler
Tags: PILM · 118 / 0 / 0

Understanding and Mitigating Cross-lingual Privacy Leakage via Language-specific and Universal Privacy Neurons (01 Jun 2025)
Wenshuo Dong, Qingsong Yang, Shu Yang, Lijie Hu, Meng Ding, Wanyu Lin, Tianhang Zheng, Di Wang
Tags: PILM · 66 / 2 / 0

TrojanStego: Your Language Model Can Secretly Be A Steganographic Privacy Leaking Agent (26 May 2025)
Dominik Meier, Jan Philip Wahle, Paul Röttger, Terry Ruas, Bela Gipp
Tags: PILM · 156 / 0 / 0

The Eye of Sherlock Holmes: Uncovering User Private Attribute Profiling via Vision-Language Model Agentic Framework (25 May 2025)
Feiran Liu, Y. Zhang, Xinyi Huang, Yinan Peng, Xinfeng Li, ..., Ranjie Duan, Simeng Qin, Yang Liu, Qingsong Wen, Wei Dong
Tags: PILM · 123 / 4 / 0

Security Concerns for Large Language Models: A Survey (24 May 2025)
Miles Q. Li, Benjamin C. M. Fung
Tags: PILM, ELM · 389 / 9 / 0

Understanding the Relationship Between Personal Data Privacy Literacy and Data Privacy Information Sharing by University Students (24 May 2025)
Brady D. Lund, Bryan Anderson, Ana Roeschley, Gahangir Hossain
Tags: PILM · 67 / 0 / 0

Can Large Language Models Really Recognize Your Name? (20 May 2025)
Dzung Pham, Peter Kairouz, Niloofar Mireshghallah, Eugene Bagdasarian, Chau Minh Pham, Amir Houmansadr
Tags: PILM · 186 / 4 / 0

A Systematic Review and Taxonomy for Privacy Breach Classification: Trends, Gaps, and Future Directions (19 May 2025)
Clint Fuchs, John D. Hastings
Tags: PILM · 63 / 0 / 0

Dark LLMs: The Growing Threat of Unaligned AI Models (15 May 2025)
Michael Fire, Yitzhak Elbazis, Adi Wasenstein, Lior Rokach
Tags: PILM · 119 / 0 / 0

PIG: Privacy Jailbreak Attack on LLMs via Gradient-based Iterative In-Context Optimization (ACL 2025; 15 May 2025)
Yidan Wang, Yanan Cao, Yubing Ren, Fang Fang, Zheng Lin, Binxing Fang
Tags: PILM · 263 / 5 / 0

User Behavior Analysis in Privacy Protection with Large Language Models: A Study on Privacy Preferences with Limited Data (08 May 2025)
Haowei Yang, Qingyi Lu, Yang Wang, Sibei Liu, Jiayun Zheng, Ao Xiang
Tags: PILM · 216 / 8 / 0

A Survey on Privacy Risks and Protection in Large Language Models (J. King Saud Univ. Comput. Inf. Sci., 2025; 04 May 2025)
Kang Chen, Xiuze Zhou, Yuanguo Lin, Shibo Feng, Li Shen, Pengcheng Wu
Tags: AILaw, PILM · 768 / 10 / 0

LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures (02 May 2025)
Francisco Aguilera-Martínez, Fernando Berzal
Tags: PILM · 250 / 6 / 0

Value Portrait: Assessing Language Models' Values through Psychometrically and Ecologically Valid Items (ACL 2025; 02 May 2025)
Jongwook Han, Dongmin Choi, Woojung Song, Eun-Ju Lee, Yohan Jo
Tags: PILM · 256 / 4 / 0