Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization

3 October 2024
Mikhail Persiianov
Arip Asadulaev
Nikita Andreev
Nikita Starodubcev
Dmitry Baranchuk
Anastasis Kratsios
Evgeny Burnaev
Alexander Korotin
Abstract

Learning conditional distributions $\pi^*(\cdot|x)$ is a central problem in machine learning, which is typically approached via supervised methods with paired data $(x,y) \sim \pi^*$. However, acquiring paired data samples is often challenging, especially in problems such as domain translation. This necessitates the development of $\textit{semi-supervised}$ models that utilize both limited paired data and additional unpaired i.i.d. samples $x \sim \pi^*_x$ and $y \sim \pi^*_y$ from the marginal distributions. The usage of such combined data is complex and often relies on heuristic approaches. To tackle this issue, we propose a new learning paradigm that $\textbf{seamlessly}$ integrates both paired and unpaired data through data likelihood maximization techniques. We demonstrate that our approach also connects intriguingly with inverse entropic optimal transport (OT). This finding allows us to apply recent advances in computational OT to establish a $\textbf{light}$ learning algorithm to get $\pi^*(\cdot|x)$. Furthermore, we demonstrate through empirical tests that our method effectively learns conditional distributions using paired and unpaired data simultaneously.
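The combined objective the abstract describes — a conditional log-likelihood term on the paired samples plus a marginal log-likelihood term on the unpaired ones — can be sketched on a toy 1-D linear-Gaussian model. This is a hypothetical illustration of the data-likelihood-maximization idea only, not the paper's actual algorithm (which relies on inverse entropic OT machinery); the model family, noise scales, and the `objective` function are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = 2.0  # ground-truth slope of the toy conditional y | x

# A few paired samples (x, y) ~ pi*(x, y) and larger unpaired pools.
x_pair = rng.normal(size=5)
y_pair = a_true * x_pair + rng.normal(scale=0.1, size=5)
x_unpaired = rng.normal(size=200)  # x ~ pi*_x
y_unpaired = a_true * rng.normal(size=200) + rng.normal(scale=0.1, size=200)  # y ~ pi*_y

def log_normal(y, mu, sigma=0.5):
    """Log-density of N(mu, sigma^2), the assumed model pi_a(y | x)."""
    return -0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def objective(a):
    # Paired term: conditional log-likelihood log pi_a(y | x).
    paired = log_normal(y_pair, a * x_pair).mean()
    # Unpaired term: marginal log-likelihood of y under the model,
    # log (1/M) * sum_j pi_a(y | x_j), Monte-Carlo averaged over the x pool.
    ll = log_normal(y_unpaired[:, None], a * x_unpaired[None, :])
    marginal = np.log(np.exp(ll).mean(axis=1) + 1e-300).mean()
    return paired + marginal

# Maximize the combined likelihood over a coarse grid of slopes.
grid = np.linspace(0.0, 4.0, 401)
a_hat = grid[np.argmax([objective(a) for a in grid])]
print(a_hat)  # lands near a_true = 2.0
```

Both data sources pull the estimate toward the true slope: the paired term fits the conditional directly, while the unpaired term matches the model's implied marginal of $y$ to the observed unpaired pool, which is the role the marginal samples play in the paper's setting.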

@article{persiianov2025_2410.02628,
  title={Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization},
  author={Mikhail Persiianov and Arip Asadulaev and Nikita Andreev and Nikita Starodubcev and Dmitry Baranchuk and Anastasis Kratsios and Evgeny Burnaev and Alexander Korotin},
  journal={arXiv preprint arXiv:2410.02628},
  year={2025}
}