A driving force behind the diverse applicability of modern machine learning is the ability to extract meaningful features across many sources. However, many practical domains involve data that are non-identically distributed across sources and statistically dependent within each source, violating vital assumptions in existing theoretical studies. Toward addressing these issues, we establish statistical guarantees for learning general representations from multiple data sources that admit different input distributions and possibly dependent data. Specifically, we study the sample complexity of learning functions $f_\star^{(t)} \circ g_\star$ from a function class $\mathcal{F} \times \mathcal{G}$, where the $f_\star^{(t)}$ are task-specific linear functions and $g_\star$ is a shared nonlinear representation. A representation $\hat{g}$ is estimated using $N$ samples from each of $T$ source tasks, and a fine-tuning function $\hat{f}^{(0)}$ is fit using $N'$ samples from a target task passed through $\hat{g}$. We show that when $N \gtrsim C_{\mathrm{dep}}\left(\dim(\mathcal{F}) + \mathrm{C}(\mathcal{G})/T\right)$, the excess risk of $\hat{f}^{(0)} \circ \hat{g}$ on the target task decays as $\tilde{O}\!\left(\nu_{\mathrm{div}}\left(\frac{\dim(\mathcal{F})}{N'} + \frac{\mathrm{C}(\mathcal{G})}{N T}\right)\right)$, where $C_{\mathrm{dep}}$ denotes the effect of data dependency, $\nu_{\mathrm{div}}$ denotes an (estimatable) measure of task diversity between the source and target tasks, and $\mathrm{C}(\mathcal{G})$ denotes the complexity of the representation class $\mathcal{G}$. In particular, our analysis reveals that, as the number of tasks $T$ increases, both the sample requirement and the risk bound converge to those of $\dim(\mathcal{F})$-dimensional regression as if $g_\star$ had been given, and that the effect of dependency only enters the sample requirement, leaving the risk bound matching the iid setting.
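For concreteness, a natural instantiation of the two-stage procedure described above is least-squares empirical risk minimization; the squared loss and the sample notation $(x^{(t)}_i, y^{(t)}_i)$ for task $t$ (with $t = 0$ the target) are illustrative assumptions, not fixed by the abstract:
$$\big(\hat{g}, \{\hat{f}^{(t)}\}_{t=1}^{T}\big) \in \operatorname*{arg\,min}_{g \in \mathcal{G},\ f^{(1)},\dots,f^{(T)} \in \mathcal{F}} \ \sum_{t=1}^{T} \sum_{i=1}^{N} \Big(y^{(t)}_i - f^{(t)}\big(g(x^{(t)}_i)\big)\Big)^2,$$
$$\hat{f}^{(0)} \in \operatorname*{arg\,min}_{f \in \mathcal{F}} \ \sum_{i=1}^{N'} \Big(y^{(0)}_i - f\big(\hat{g}(x^{(0)}_i)\big)\Big)^2.$$
The excess risk bound stated above then applies to the target-task predictor $\hat{f}^{(0)} \circ \hat{g}$.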