From Search To Sampling: Generative Models For Robust Algorithmic Recourse

12 May 2025
Prateek Garg
Lokesh Nagalapatti
Sunita Sarawagi
ArXiv | PDF | HTML
Abstract

Algorithmic Recourse provides individuals who are adversely impacted by automated model decisions with recommendations on how to alter their profiles to achieve a favorable outcome. Effective recourse methods must balance three conflicting goals: proximity to the original profile to minimize cost, plausibility for realistic recourse, and validity to ensure the desired outcome. We show that existing methods train for these objectives separately and then search for recourse through a joint optimization over the recourse goals during inference, leading to poor recourse recommendations. We introduce GenRe, a generative recourse model designed to train the three recourse objectives jointly. Training such generative models is non-trivial due to the lack of direct recourse supervision. We propose efficient ways to synthesize such supervision and further show that GenRe's training leads to a consistent estimator. Unlike most prior methods, which employ non-robust gradient-descent-based search during inference, GenRe simply performs forward sampling over the generative model to produce minimum-cost recourse, leading to superior performance across multiple metrics. We also demonstrate that GenRe provides the best trade-off between cost, plausibility, and validity compared to state-of-the-art baselines. Our code is available at: this https URL.
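
To make the inference idea concrete, here is a minimal, hypothetical Python sketch of the forward-sampling step the abstract describes: draw candidate profiles from a conditional generative model, keep those the decision model accepts, and return the cheapest one. The generator and classifier below are illustrative stubs, not the paper's GenRe model or training procedure; the real implementation is in the authors' repository.

```python
# Hypothetical sketch of recourse via forward sampling (not the authors' code).
# A stub Gaussian "generator" and linear "classifier" stand in for a trained
# conditional generative model and the deployed decision model.

import numpy as np

rng = np.random.default_rng(0)

def classifier(x):
    """Stub decision model: favorable outcome iff features sum above a threshold."""
    return float(x.sum() > 1.0)

def generator_sample(x_orig, n):
    """Stub for a trained conditional generative model p(x' | x_orig):
    here, simple Gaussian perturbations around the original profile."""
    return x_orig + 0.5 * rng.standard_normal((n, x_orig.shape[0]))

def recourse_by_sampling(x_orig, n_samples=200):
    """Inference by forward sampling: no gradient-descent search.
    Sample candidates, filter for validity, return the minimum-cost one."""
    candidates = generator_sample(x_orig, n_samples)
    valid = candidates[[classifier(c) == 1.0 for c in candidates]]
    if len(valid) == 0:
        return None  # no valid recourse found in this batch
    costs = np.abs(valid - x_orig).sum(axis=1)  # proximity as L1 cost
    return valid[costs.argmin()]                # minimum-cost valid recourse

x = np.array([0.2, 0.1])        # profile currently denied by the classifier
print(recourse_by_sampling(x))  # cheapest sampled profile that flips the decision
```

In this sketch, plausibility is implicit in what the generator is able to produce; the paper's contribution is training the generative model so that sampled candidates are jointly proximal, plausible, and valid by construction.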

View on arXiv
@article{garg2025_2505.07351,
  title={From Search To Sampling: Generative Models For Robust Algorithmic Recourse},
  author={Prateek Garg and Lokesh Nagalapatti and Sunita Sarawagi},
  journal={arXiv preprint arXiv:2505.07351},
  year={2025}
}