Learning from Conditional Distributions via Dual Embeddings

15 July 2016
Bo Dai
Niao He
Yunpeng Pan
Byron Boots
Le Song
arXiv:1607.04579
Abstract

Many machine learning tasks, such as learning with invariance and policy evaluation in reinforcement learning, can be characterized as problems of learning from conditional distributions. In such problems, each sample $x$ is itself associated with a conditional distribution $p(z|x)$ represented by samples $\{z_i\}_{i=1}^M$, and the goal is to learn a function $f$ that links these conditional distributions to target values $y$. These learning problems become very challenging when we have only limited samples, or in the extreme case only a single sample, from each conditional distribution. Commonly used approaches either assume that $z$ is independent of $x$, or require an overwhelmingly large number of samples from each conditional distribution. To address these challenges, we propose a novel approach based on a new min-max reformulation of the learning-from-conditional-distributions problem. With this reformulation, we only need to deal with the joint distribution $p(z,x)$. We also design an efficient learning algorithm, Embedding-SGD, and establish theoretical sample complexity guarantees for such problems. Finally, numerical experiments on both synthetic and real-world datasets show that the proposed approach significantly improves over existing algorithms.
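To make the reformulation concrete: for a convex loss $\ell$, Fenchel duality gives $\ell(v, y) = \max_u \{uv - \ell^*(u, y)\}$, so the objective $\min_f \mathbb{E}_{x,y}\big[\ell\big(\mathbb{E}_{z|x}[f(z,x)], y\big)\big]$ becomes the saddle-point problem $\min_f \max_{u(\cdot,\cdot)} \mathbb{E}_{x,y,z}\big[u(x,y)\,f(z,x) - \ell^*(u(x,y), y)\big]$, in which the conditional expectation enters linearly and can therefore be estimated from single joint samples. The following sketch illustrates such a primal-dual stochastic update; the squared loss, the random-feature parameterizations, the step-size schedule, and the synthetic data are assumptions made here for illustration, not the paper's exact construction.

```python
# A minimal, illustrative sketch of an Embedding-SGD-style primal-dual update,
# assuming the squared loss ell(v, y) = (v - y)^2 / 2, whose Fenchel conjugate
# is ell*(u, y) = u^2 / 2 + u*y, and random-feature linear models for both the
# primal function f(z, x) and the dual function u(x, y). Feature maps and
# synthetic data are hypothetical choices, not the paper's algorithm verbatim.
import numpy as np

rng = np.random.default_rng(0)
D = 100  # random-feature dimension (illustrative choice)

# Random Fourier-style feature maps for f and u.
Wf, bf = rng.normal(size=(D, 2)), rng.uniform(0, 2 * np.pi, D)
Wu, bu = rng.normal(size=(D, 2)), rng.uniform(0, 2 * np.pi, D)

def phi_f(z, x):  # features of the primal function f(z, x)
    return np.sqrt(2.0 / D) * np.cos(Wf @ np.array([z, x]) + bf)

def phi_u(x, y):  # features of the dual function u(x, y)
    return np.sqrt(2.0 / D) * np.cos(Wu @ np.array([x, y]) + bu)

theta_f = np.zeros(D)  # primal parameters (descended)
theta_u = np.zeros(D)  # dual parameters (ascended)

for t in range(50000):
    eta = 0.5 / np.sqrt(1.0 + t)  # decaying step size

    # One joint sample (x, z, y): a single z per x suffices, which is
    # exactly what the min-max reformulation buys us.
    x = rng.uniform(-1.0, 1.0)
    z = x + 0.3 * rng.normal()   # one draw from p(z | x) (synthetic)
    y = np.sin(np.pi * x)        # target value for this x (synthetic)

    f_val = theta_f @ phi_f(z, x)
    u_val = theta_u @ phi_u(x, y)

    # Per-sample saddle objective: u*f - ell*(u, y) = u*f - u^2/2 - u*y.
    theta_f -= eta * u_val * phi_f(z, x)                # descend in f
    theta_u += eta * (f_val - u_val - y) * phi_u(x, y)  # ascend in u

# Sanity check: E_{z|x}[f(z, x)] should track the target y(x).
for x in np.linspace(-1.0, 1.0, 5):
    zs = x + 0.3 * rng.normal(size=500)
    f_avg = np.mean([theta_f @ phi_f(z, x) for z in zs])
    print(f"x={x:+.2f}  E_z[f(z,x)]={f_avg:+.3f}  y(x)={np.sin(np.pi * x):+.3f}")
```

Note that for the squared loss the conjugate contributes a strongly concave $-u^2/2$ term, which keeps the simultaneous descent/ascent dynamics stable; with a decaying step size the conditionally averaged primal function should approach the regression target.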
