
Transfer Learning for Latent Variable Network Models

Abstract

We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by P for the source and Q for the target. We wish to estimate Q given two kinds of data: (1) edge data from a subgraph induced by an o(1) fraction of the nodes of Q, and (2) edge data from all of P. If the source P has no relation to the target Q, the estimation error must be Ω(1). However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves o(1) error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems.
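To make the setup concrete, the following is a minimal, hypothetical sketch (not the paper's algorithm) of the transfer idea: when the source and target share latent variables, distances computed from the fully observed source graph can be used to match unobserved target nodes to observed ones, and Q is then estimated by averaging target edges among matched neighbors. The latent model, the neighborhood-profile distance, and all parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared-latent-variable model: each node has a latent
# position x_i; source P and target Q are different functions of (x_i, x_j).
n = 300
x = rng.uniform(size=n)                 # shared latent variables
P = np.minimum.outer(x, x)              # source edge probabilities
Q = np.outer(x, x)                      # target edge probabilities

# Full source graph: edge data from all of P.
A_P = (rng.uniform(size=(n, n)) < P).astype(float)
A_P = np.triu(A_P, 1)
A_P += A_P.T

# Target edges observed only on a small subset S of nodes.
m = 30
S = rng.choice(n, size=m, replace=False)
A_Q_S = (rng.uniform(size=(m, m)) < Q[np.ix_(S, S)]).astype(float)
A_Q_S = np.triu(A_Q_S, 1)
A_Q_S += A_Q_S.T

def source_dist(i, j):
    # Distance between nodes derived from the source graph alone:
    # mean absolute difference of neighborhood profiles in A_P
    # (a common proxy for distance between latent positions).
    mask = np.ones(n, bool)
    mask[[i, j]] = False
    return np.abs(A_P[i, mask] - A_P[j, mask]).mean()

def estimate_Q(i, j, k=5):
    # Estimate Q[i, j] by averaging observed target edges among the k
    # observed nodes closest (in source distance) to i and to j.
    near_i = sorted(S, key=lambda s: source_dist(i, s))[:k]
    near_j = sorted(S, key=lambda s: source_dist(j, s))[:k]
    idx = {s: a for a, s in enumerate(S)}
    vals = [A_Q_S[idx[a], idx[b]]
            for a in near_i for b in near_j if a != b]
    return float(np.mean(vals))

est = estimate_Q(0, 1)
```

The estimate uses only source edge data to define "closeness" and only the observed target subgraph to supply edge values, mirroring the two data sources in the abstract; accuracy hinges on the latent variables actually being shared between P and Q.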
