ResearchTrend.AI

Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles

23 November 2023
Luca Scimeca
Alexander Rubinstein
Damien Teney
Seong Joon Oh
A. Nicolicioiu
Abstract

Spurious correlations in the data, where multiple cues are predictive of the target labels, often lead to a phenomenon known as shortcut learning, in which a model relies on erroneous, easy-to-learn cues while ignoring reliable ones. In this work, we propose DiffDiv, an ensemble diversification framework that exploits Diffusion Probabilistic Models (DPMs) to mitigate this form of bias. We show that, at particular training intervals, DPMs can generate images with novel feature combinations even when trained on samples whose input features are correlated. We leverage this crucial property to generate synthetic counterfactuals that increase model diversity via ensemble disagreement. We show that DPM-guided diversification is sufficient to remove dependence on shortcut cues, with no need for additional supervised signals. We further empirically quantify its efficacy on several diversification objectives, and finally show improved generalization and diversification on par with prior work that relies on auxiliary data collection.
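To make the disagreement idea concrete, here is a minimal sketch of an ensemble objective of the kind the abstract describes: standard cross-entropy on labeled data, plus a penalty that pushes members' predictions apart on synthetic counterfactuals. This is illustrative only — the function name, the inner-product agreement penalty, and the `alpha` weight are assumptions, not the paper's exact objective.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def diversification_loss(member_logits, labels, cf_logits, alpha=1.0):
    """Toy ensemble-diversification objective.

    member_logits: list of (N, C) logit arrays on labeled data, one per member
    labels:        (N,) integer class labels
    cf_logits:     list of (M, C) logit arrays on synthetic counterfactuals
    Returns mean supervised cross-entropy plus an agreement penalty on the
    counterfactuals, so minimizing it encourages members to disagree there.
    """
    # supervised cross-entropy, averaged over ensemble members
    task = 0.0
    for logits in member_logits:
        p = softmax(logits)
        task += -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    task /= len(member_logits)

    # agreement penalty: mean pairwise inner product of member predictions
    # on counterfactuals (high when members agree, so it is penalized)
    probs = [softmax(l) for l in cf_logits]
    agree, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            agree += (probs[i] * probs[j]).sum(axis=-1).mean()
            pairs += 1
    agree /= max(pairs, 1)

    return task + alpha * agree
```

With two members that make identical confident predictions on a counterfactual, the loss is higher than when their predictions oppose each other, which is the intended pressure toward diversity.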

@article{scimeca2025_2311.16176,
  title={Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles},
  author={Luca Scimeca and Alexander Rubinstein and Damien Teney and Seong Joon Oh and Yoshua Bengio},
  journal={arXiv preprint arXiv:2311.16176},
  year={2025}
}