arXiv:2205.03246
What Makes A Good Fisherman? Linear Regression under Self-Selection Bias

6 May 2022
Yeshwanth Cherapanamjeri
C. Daskalakis
Andrew Ilyas
Manolis Zampetakis
Abstract

In the classical setting of self-selection, the goal is to learn $k$ models simultaneously from observations $(x^{(i)}, y^{(i)})$, where $y^{(i)}$ is the output of one of $k$ underlying models on input $x^{(i)}$. In contrast to mixture models, where we observe the output of a randomly selected model, here the observed model depends on the outputs themselves and is determined by some known selection criterion. For example, we might observe the highest output, the smallest output, or the median output of the $k$ models. In known-index self-selection, the identity of the observed model output is observable; in unknown-index self-selection, it is not. Self-selection has a long history in Econometrics and applications in various theoretical and applied fields, including treatment effect estimation, imitation learning, learning from strategically reported data, and learning from markets at disequilibrium. In this work, we present the first computationally and statistically efficient estimation algorithms for the most standard setting of this problem, where the models are linear. In the known-index case, we require $\mathrm{poly}(1/\varepsilon, k, d)$ sample and time complexity to estimate all model parameters to accuracy $\varepsilon$ in $d$ dimensions, and can accommodate quite general selection criteria. In the more challenging unknown-index case, even the identifiability of the linear models (from infinitely many samples) was not known.
We show three results in this case for the commonly studied $\max$ self-selection criterion: (1) we show that the linear models are indeed identifiable; (2) for general $k$ we provide an algorithm with $\mathrm{poly}(d)\exp(\mathrm{poly}(k))$ sample and time complexity to estimate the regression parameters up to error $1/\mathrm{poly}(k)$; and (3) for $k = 2$ we provide an algorithm for any error $\varepsilon$ with $\mathrm{poly}(d, 1/\varepsilon)$ sample and time complexity.
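The observation model in the abstract can be sketched in a few lines: $k$ linear models generate noisy outputs on a shared input, and under the $\max$ self-selection criterion only the largest output is observed (with the winning index additionally revealed in the known-index setting). The Gaussian covariates, noise level, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instantiation of max self-selection with linear models.
# W holds the k unknown regression vectors w_1, ..., w_k in d dimensions.
k, d, n = 2, 5, 1000
W = rng.normal(size=(k, d))

X = rng.normal(size=(n, d))                              # inputs x^{(i)}
outputs = X @ W.T + rng.normal(scale=0.1, size=(n, k))   # k noisy model outputs

y = outputs.max(axis=1)       # observed response y^{(i)} under max selection
idx = outputs.argmax(axis=1)  # winning model index; revealed only in the
                              # known-index setting, hidden in the unknown-index one
```

An estimation algorithm in the known-index setting would receive the triples `(X, y, idx)`, while in the unknown-index setting it would see only `(X, y)`.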
