
I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data (arXiv:2210.13954)

25 October 2022
Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci

Papers citing "I Prefer not to Say: Protecting User Consent in Models with Optional Personal Data"

5 / 5 papers shown
1. Causal Conceptions of Fairness and their Consequences
   H. Nilforoshan, Johann D. Gaebler, Ravi Shroff, Sharad Goel · FaML · 12 Jul 2022

2. Achieving Fairness at No Utility Cost via Data Reweighing with Influence
   Peizhao Li, Hongfu Liu · TDI · 01 Feb 2022

3. Robin Hood and Matthew Effects: Differential Privacy Has Disparate Impact on Synthetic Data
   Georgi Ganev, Bristena Oprisanu, Emiliano De Cristofaro · 23 Sep 2021

4. Fairness without Imputation: A Decision Tree Approach for Fair Prediction with Missing Values
   Haewon Jeong, Hao Wang, Flavio du Pin Calmon · FaML · 21 Sep 2021

5. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
   Alexandra Chouldechova · FaML · 24 Oct 2016