Transfer Learning for Latent Variable Network Models

5 June 2024
Akhil Jalan
Arya Mazumdar
Soumendu Sundar Mukherjee
Purnamrita Sarkar
Abstract

We study transfer learning for estimation in latent variable network models. In our setting, the conditional edge probability matrices given the latent variables are represented by $P$ for the source and $Q$ for the target. We wish to estimate $Q$ given two kinds of data: (1) edge data from a subgraph induced by an $o(1)$ fraction of the nodes of $Q$, and (2) edge data from all of $P$. If the source $P$ has no relation to the target $Q$, the estimation error must be $\Omega(1)$. However, we show that if the latent variables are shared, then vanishing error is possible. We give an efficient algorithm that utilizes the ordering of a suitably defined graph distance. Our algorithm achieves $o(1)$ error and does not assume a parametric form on the source or target networks. Next, for the specific case of Stochastic Block Models, we prove a minimax lower bound and show that a simple algorithm achieves this rate. Finally, we empirically demonstrate our algorithm's use on real-world and simulated graph transfer problems.
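The abstract does not spell out the procedure, so the following is only a minimal illustrative sketch of a nearest-neighbour-style transfer estimator in this setting, not the authors' algorithm. The particular source distance, the k-nearest-neighbour smoothing, and the names `source_distance` and `transfer_estimate` are assumptions made for illustration; the sketch assumes source and target share node identities (and hence latent variables) and that the target is observed only on a small induced subgraph.

```python
import numpy as np

def source_distance(A_p):
    """Illustrative distance between nodes computed on the source network:
    rows of A_p are compared through their inner products with all other
    rows, a common proxy for distance between latent positions in graphon
    estimation. O(n^3) memory as written, so intended only for small n."""
    n = A_p.shape[0]
    G = (A_p @ A_p) / n                      # G[i, k] ~ <row_i, row_k> / n
    return np.abs(G[:, None, :] - G[None, :, :]).max(axis=2)

def transfer_estimate(A_p, A_q_obs, observed, k=10):
    """Hypothetical transfer estimator for the target matrix Q.

    A_p      : (n, n) adjacency matrix of the fully observed source network.
    A_q_obs  : (n, n) target adjacency; only entries with both endpoints in
               `observed` are used.
    observed : boolean mask of length n marking the small fraction of nodes
               whose induced target subgraph is observed.
    k        : number of observed neighbours used for smoothing.
    """
    n = A_p.shape[0]
    D = source_distance(A_p)
    obs_idx = np.flatnonzero(observed)
    # For every node, its k nearest *observed* nodes under the source distance.
    nn = obs_idx[np.argsort(D[:, obs_idx], axis=1)[:, :k]]
    Q_hat = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Average the observed target edges between the two neighbourhoods.
            Q_hat[i, j] = A_q_obs[np.ix_(nn[i], nn[j])].mean()
    return Q_hat
```

In use, `A_p` would be the full source adjacency, `A_q_obs` the target adjacency zeroed outside the observed block, and `observed` a mask selecting the sampled node set; the estimator then smooths the few observed target edges across nodes that look alike in the source.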
