
De-amplifying Bias from Differential Privacy in Language Model Fine-tuning

arXiv:2402.04489 · 7 February 2024
Sanjari Srivastava
Piotr (Peter) Mardziel
Zhikhun Zhang
Archana Ahlawat
Anupam Datta
John C. Mitchell
ArXiv (abs) · PDF · HTML · GitHub

Papers citing "De-amplifying Bias from Differential Privacy in Language Model Fine-tuning"

3 / 3 papers shown
Differentially-private text generation degrades output language quality
Erion Cano, Ivan Habernal (SyDa)
14 Sep 2025
SoK: What Makes Private Learning Unfair?
Kai Yao, Marc Juarez
24 Jan 2025
Identifying and Mitigating Privacy Risks Stemming from Language Models: A Survey
Victoria Smith, Ali Shahin Shamsabadi, Carolyn Ashurst, Adrian Weller (PILM)
27 Sep 2023