
GiGL: Large-Scale Graph Neural Networks at Snapchat

Abstract

Recent advances in graph machine learning (ML) with the introduction of Graph Neural Networks (GNNs) have led to widespread interest in applying these approaches to business applications at scale. GNNs enable differentiable end-to-end (E2E) learning of model parameters given graph structure, enabling optimization for popular node-, edge (link)-, and graph-level tasks. While research innovation in new GNN layers and training strategies has been rapid, industrial adoption and utility of GNNs have lagged considerably due to the unique scale challenges that large-scale graph ML problems create. In this work, we share our approach to training, inference, and utilization of GNNs at Snapchat. To this end, we present GiGL (Gigantic Graph Learning), an open-source library to enable large-scale distributed graph ML to the benefit of researchers, ML engineers, and practitioners. We use GiGL internally at Snapchat to manage the heavy lifting of GNN workflows, including graph data preprocessing from relational DBs, subgraph sampling, distributed training, inference, and orchestration. GiGL is designed to interface cleanly with open-source GNN modeling libraries prominent in academia, like PyTorch Geometric (PyG), while abstracting away scaling and productionization challenges so that internal practitioners can focus on modeling. GiGL is used in multiple production settings, and has powered over 35 launches across multiple business domains in the last 2 years in the contexts of friend recommendation, content recommendation, and advertising. This work details the library's high-level design and tooling, its scaling properties, case studies in diverse business settings with industry-scale graphs, and several key lessons learned in employing graph ML at scale on large social data. GiGL is open-sourced at this https URL.
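To make the PyG interoperability mentioned above concrete, below is a minimal PyTorch Geometric sketch of the kind of link-prediction GNN such a pipeline might train (e.g., for a friend-recommendation-style task). The model class, dimensions, and dot-product decoder are illustrative assumptions, not GiGL's actual API; the abstract does not specify GiGL's modeling interface.

import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class SAGEEncoder(torch.nn.Module):
    """Hypothetical two-layer GraphSAGE encoder; not part of GiGL itself."""
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        # Two rounds of neighborhood aggregation over a sampled subgraph.
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

    def score_links(self, z, edge_pairs):
        # Dot-product decoder: score candidate (src, dst) node pairs.
        src, dst = edge_pairs
        return (z[src] * z[dst]).sum(dim=-1)

In a GiGL-style workflow, the heavy lifting (preprocessing, subgraph sampling, distributed training, and inference orchestration) would happen around a model like this, which is why keeping the modeling code in plain PyG is attractive.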

@article{zhao2025_2502.15054,
  title={GiGL: Large-Scale Graph Neural Networks at Snapchat},
  author={Tong Zhao and Yozen Liu and Matthew Kolodner and Kyle Montemayor and Elham Ghazizadeh and Ankit Batra and Zihao Fan and Xiaobin Gao and Xuan Guo and Jiwen Ren and Serim Park and Peicheng Yu and Jun Yu and Shubham Vij and Neil Shah},
  journal={arXiv preprint arXiv:2502.15054},
  year={2025}
}