LAMP: Extracting Text from Gradients with Language Model Priors
arXiv:2202.08827
17 February 2022
Mislav Balunović, Dimitar I. Dimitrov, Nikola Jovanović, Martin Vechev

Papers citing "LAMP: Extracting Text from Gradients with Language Model Priors" (11 papers)

LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures
Francisco Aguilera-Martínez, Fernando Berzal
Topics: PILM
02 May 2025

ReCIT: Reconstructing Full Private Data from Gradient in Parameter-Efficient Fine-Tuning of Large Language Models
Jin Xie, Ruishi He, Songze Li, Xiaojun Jia, Shouling Ji
Topics: SILM, AAML
29 Apr 2025

Privacy-Preserving Data Deduplication for Enhancing Federated Learning of Language Models
Aydin Abadi, Vishnu Asutosh Dasu, Sumanta Sarkar
11 Jul 2024

DAGER: Exact Gradient Inversion for Large Language Models
Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
Topics: FedML
24 May 2024

Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

A Survey of What to Share in Federated Learning: Perspectives on Model Utility, Privacy Leakage, and Communication Efficiency
Jiawei Shao, Zijian Li, Wenqiang Sun, Tailin Zhou, Yuchang Sun, Lumin Liu, Zehong Lin, Yuyi Mao, Jun Zhang
Topics: FedML
20 Jul 2023

Recovering Private Text in Federated Learning of Language Models
Samyak Gupta, Yangsibo Huang, Zexuan Zhong, Tianyu Gao, Kai Li, Danqi Chen
Topics: FedML
17 May 2022

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
Topics: FedML
29 Jan 2022

Gradient-based Adversarial Attacks against Text Transformers
Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela
Topics: SILM
15 Apr 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
Topics: ELM
20 Apr 2018