Nonlinear Concept Erasure: a Density Matching Approach

16 July 2025
Antoine Saillenfest
Pirmin Lemberger
Main: 7 pages · 10 figures · 11 tables · Bibliography: 1 page · Appendix: 9 pages
Abstract

Ensuring that neural models used in real-world applications cannot infer sensitive information, such as demographic attributes like gender or race, from text representations is a critical challenge when fairness is a concern. We address this issue through concept erasure, a process that removes information related to a specific concept from distributed representations while preserving as much of the remaining semantic information as possible. Our approach involves learning an orthogonal projection in the embedding space, designed to make the class-conditional feature distributions of the discrete concept to erase indistinguishable after projection. By adjusting the rank of the projector, we control the extent of information removal, while its orthogonality ensures strict preservation of the local structure of the embeddings. Our method, termed L̄EOPARD, achieves state-of-the-art performance in nonlinear erasure of a discrete attribute on classic natural language processing benchmarks. Furthermore, we demonstrate that L̄EOPARD effectively mitigates bias in deep nonlinear classifiers, thereby promoting fairness.
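The erasure step described in the abstract (applying a rank-controlled orthogonal projection to embeddings) can be sketched as follows. This is a minimal illustration, not the paper's method: the orthonormal basis `U` of directions to remove is a placeholder here, whereas the paper learns it via density matching on the class-conditional feature distributions.

```python
import numpy as np

def erase_with_projection(X, U):
    """Project embeddings onto the orthogonal complement of span(U).

    X: (n, d) array of embeddings.
    U: (d, k) orthonormal basis of directions to erase (hypothetical:
       the paper learns these directions by matching class-conditional
       densities; here U is assumed given).

    The projector P = I - U U^T is an orthogonal projector of rank d - k,
    so k controls how much of the representation is removed.
    """
    d = X.shape[1]
    P = np.eye(d) - U @ U.T  # symmetric, idempotent: P @ P == P
    return X @ P

# Toy usage: erase 2 random directions from 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
U, _ = np.linalg.qr(rng.standard_normal((4, 2)))  # orthonormal columns
X_erased = erase_with_projection(X, U)
```

After projection, the erased embeddings have no component along the removed directions (`X_erased @ U` is zero), and applying the projection again changes nothing, since an orthogonal projector is idempotent.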
