arXiv:2405.01389
Invariant Risk Minimization Is A Total Variation Model

2 May 2024
Zhao-Rong Lai
Wei-Wen Wang
Abstract

Invariant risk minimization (IRM) is an emerging approach for generalizing invariant features across different environments in machine learning. While most related works focus on new IRM settings or new application scenarios, the mathematical essence of IRM remains to be properly explained. We verify that IRM is essentially a total variation based on the $L^2$ norm (TV-$\ell_2$) of the learning risk with respect to the classifier variable. Moreover, we propose a novel IRM framework based on the TV-$\ell_1$ model. It not only expands the classes of functions that can be used as the learning risk and the feature extractor, but also, based on the coarea formula, has robust performance in denoising and invariant feature preservation. We also illustrate some requirements for IRM-TV-$\ell_1$ to achieve out-of-distribution generalization. Experimental results show that the proposed framework achieves competitive performance in several benchmark machine learning scenarios.
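For intuition, here is a minimal sketch of how the two penalties can be written, assuming an IRMv1-style objective with a feature extractor $\Phi$, per-environment risks $R^e$, and a scalar dummy classifier $w$; the paper's exact formulation may differ. The IRM penalty is the squared $L^2$ norm of the gradient of the risk with respect to the classifier variable (a TV-$\ell_2$ term), while the proposed variant uses the unsquared gradient norm (a TV-$\ell_1$ term):

$$
\min_{\Phi} \; \sum_{e} R^e(\Phi) \;+\; \lambda \sum_{e} \Big\| \nabla_{w} R^e(w \cdot \Phi) \big|_{w=1} \Big\|_2^{2} \qquad \text{(TV-$\ell_2$, IRMv1-style)},
$$

$$
\min_{\Phi} \; \sum_{e} R^e(\Phi) \;+\; \lambda \sum_{e} \Big\| \nabla_{w} R^e(w \cdot \Phi) \big|_{w=1} \Big\|_2 \qquad \text{(IRM-TV-$\ell_1$)}.
$$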
