
A Distributional-Lifting Theorem for PAC Learning

Main: 28 pages
7 figures
Bibliography: 5 pages
Abstract

The apparent difficulty of efficient distribution-free PAC learning has led to a large body of work on distribution-specific learning. Distributional assumptions facilitate the design of efficient algorithms but also limit their reach and relevance. Towards addressing this, we prove a distributional-lifting theorem: this upgrades a learner that succeeds with respect to a limited distribution family $\mathcal{D}$ to one that succeeds with respect to any distribution $D^\star$, with an efficiency overhead that scales with the complexity of expressing $D^\star$ as a mixture of distributions in $\mathcal{D}$.

Recent work of Blanc, Lange, Malik, and Tan considered the special case of lifting uniform-distribution learners and designed a lifter that uses a conditional sample oracle for $D^\star$, a strong form of access not afforded by the standard PAC model. Their approach, which draws on ideas from semi-supervised learning, first learns $D^\star$ and then uses this information to lift.

We show that their approach is information-theoretically intractable with access only to random examples, thereby giving formal justification for their use of the conditional sample oracle. We then take a different approach that sidesteps the need to learn $D^\star$, yielding a lifter that works in the standard PAC model and enjoys additional advantages: it works for all base distribution families, preserves the noise tolerance of learners, has better sample complexity, and is simpler.
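To make the lifting overhead concrete, here is a hedged sketch of the mixture setup; the notation ($k$, $w_i$, $D_i$) is ours and the paper's formal statement may use a different parameterization or complexity measure. Suppose the target distribution decomposes as a finite mixture over the base family:

$$D^\star \;=\; \sum_{i=1}^{k} w_i \, D_i, \qquad D_i \in \mathcal{D}, \quad w_i \ge 0, \quad \sum_{i=1}^{k} w_i = 1.$$

Under this assumed parameterization, a natural complexity measure for expressing $D^\star$ as a mixture of distributions in $\mathcal{D}$ is the number of components $k$, so the lifted learner's efficiency overhead would be read as scaling with $k$.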

@article{blanc2025_2506.16651,
  title={A Distributional-Lifting Theorem for PAC Learning},
  author={Guy Blanc and Jane Lange and Carmen Strassle and Li-Yang Tan},
  journal={arXiv preprint arXiv:2506.16651},
  year={2025}
}