CryptoCredit: Securely Training Fair Models

9 October 2020
Leo de Castro, Jiahao Chen, Antigoni Polychroniadou
ArXiv | PDF | HTML

Papers citing "CryptoCredit: Securely Training Fair Models"

3 / 3 papers shown
Title: Can Querying for Bias Leak Protected Attributes? Achieving Privacy With Smooth Sensitivity
Authors: Faisal Hamman, Jiahao Chen, Sanghamitra Dutta
Date: 03 Nov 2022

Title: A Survey on Bias and Fairness in Machine Learning
Authors: Ninareh Mehrabi, Fred Morstatter, N. Saxena, Kristina Lerman, Aram Galstyan
Topics: SyDa, FaML
Date: 23 Aug 2019

Title: Fair prediction with disparate impact: A study of bias in recidivism prediction instruments
Authors: Alexandra Chouldechova
Topics: FaML
Date: 24 Oct 2016