Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models
arXiv:2306.13789 · 23 June 2023
Adel M. Elmahdy, A. Salem
SILM

Papers citing "Deconstructing Classifiers: Towards A Data Reconstruction Attack Against Text Classification Models"

5 / 5 papers shown

NLP Security and Ethics, in the Wild
Heather Lent, Erick Galinkin, Yiyi Chen, Jens Myrup Pedersen, Leon Derczynski, Johannes Bjerva
SILM
42 · 0 · 0 · 09 Apr 2025

Reconstructing training data from document understanding models
Jérémie Dentan, Arnaud Paran, A. Shabou
AAML, SyDa
38 · 1 · 0 · 05 Jun 2024

Privacy-preserving Fine-tuning of Large Language Models through Flatness
Tiejin Chen, Longchao Da, Huixue Zhou, Pingzhi Li, Kaixiong Zhou, Tianlong Chen, Hua Wei
29 · 5 · 0 · 07 Mar 2024

Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, D. Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM
281 · 1,812 · 0 · 14 Dec 2020

When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
Gavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, Kunal Talwar
245 · 80 · 0 · 11 Dec 2020