Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning

10 December 2023
Jianwei Li, Sheng Liu, Qi Lei
PILM · SILM · AAML

Papers citing "Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning"

4 / 4 papers shown
DAGER: Exact Gradient Inversion for Large Language Models
Ivo Petrov, Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
FedML · 24 May 2024

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models
Liam H. Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein
FedML · 29 Jan 2022

When the Curious Abandon Honesty: Federated Learning Is Not Private
Franziska Boenisch, Adam Dziedzic, R. Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FedML · AAML · 06 Dec 2021

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Jinpeng Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM · 20 Apr 2018