arXiv:2004.12496

Learning and Testing Junta Distributions with Subcube Conditioning

26 April 2020
Xi Chen
Rajesh Jayaram
Amit Levi
Erik Waingarten
Abstract

We study the problems of learning and testing junta distributions on $\{-1,1\}^n$ with respect to the uniform distribution, where a distribution $p$ is a $k$-junta if its probability mass function $p(x)$ depends on a subset of at most $k$ variables. The main contribution is an algorithm for finding relevant coordinates in a $k$-junta distribution with subcube conditioning [BC18, CCKLW20]. We give two applications:

  1. An algorithm for learning $k$-junta distributions with $\tilde{O}(k/\epsilon^2)\log n + O(2^k/\epsilon^2)$ subcube conditioning queries, and
  2. An algorithm for testing $k$-junta distributions with $\tilde{O}((k+\sqrt{n})/\epsilon^2)$ subcube conditioning queries.

All our algorithms are optimal up to poly-logarithmic factors. Our results show that subcube conditioning, as a natural model for accessing high-dimensional distributions, enables significant savings in learning and testing junta distributions compared to the standard sampling model. This addresses an open question posed by Aliakbarpour, Blais, and Rubinfeld [ABR17].
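
For intuition, the sketch below simulates the two objects the abstract defines: a $k$-junta distribution (its mass depends on at most $k$ coordinates; the rest are uniform and independent) and a subcube conditioning query, which fixes some coordinates and draws a sample from the conditional distribution. All names are hypothetical and this is not code from the paper; the rejection-sampling loop is only a convenient way to simulate the oracle, whereas in the query model each conditional sample counts as a single query.

```python
import random

def make_junta_sampler(n, relevant, table):
    """Sampler for a k-junta distribution on {-1,1}^n (illustrative).

    relevant: list of k coordinate indices the pmf depends on.
    table: dict mapping each assignment in {-1,1}^k (as a tuple)
           to its probability mass.
    """
    settings = list(table.keys())
    weights = [table[s] for s in settings]

    def sample():
        # Draw the relevant coordinates from the junta's pmf;
        # all other coordinates are uniform and independent.
        chosen = random.choices(settings, weights=weights)[0]
        x = [random.choice((-1, 1)) for _ in range(n)]
        for idx, val in zip(relevant, chosen):
            x[idx] = val
        return tuple(x)

    return sample

def subcube_conditional_sample(sample, restriction):
    """One subcube conditioning query, simulated by naive rejection.

    restriction: dict {coordinate: value in {-1,1}} defining the subcube.
    Returns a sample from the distribution conditioned on the subcube.
    """
    while True:
        x = sample()
        if all(x[i] == v for i, v in restriction.items()):
            return x

# Usage: a 2-junta on {-1,1}^5 whose mass depends on coordinates 0 and 3,
# which are perfectly correlated; the remaining coordinates carry no mass.
p = make_junta_sampler(
    n=5,
    relevant=[0, 3],
    table={(1, 1): 0.5, (-1, -1): 0.5},
)
print(subcube_conditional_sample(p, {0: 1}))  # coordinate 3 must also be 1
```

The usage line illustrates why conditioning is more powerful than plain sampling: fixing coordinate 0 visibly pins down coordinate 3, exposing it as relevant, which is the kind of signal the paper's coordinate-finding algorithm exploits.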
