ResearchTrend.AI

arXiv:2012.12931 (v3, latest)

Issues with Propagation Based Models for Graph-Level Outlier Detection

Big Data (BD), 2020
23 December 2020
Lingxiao Zhao
Leman Akoglu
Abstract

Graph-Level Outlier Detection (GLOD) is the task of identifying unusual graphs within a graph database, which has received little attention compared to node-level detection in a single graph. As propagation-based graph embeddings from GNNs and graph kernels have achieved promising results on another graph-level task, i.e., graph classification, we study applying those models to tackle GLOD. Instead of developing new models, this paper identifies and delves into a fundamental and intriguing issue with applying propagation-based models to GLOD, with evaluation conducted on repurposed binary graph classification datasets where one class is down-sampled as the outlier class. We find that the ROC-AUC performance of the models changes significantly (flips from high to low) depending on which class is down-sampled. Interestingly, the ROC-AUCs on these two variants approximately sum to 1, and the performance gap between them is amplified with increasing propagation. We carefully study the graph embedding space produced by propagation-based models and find two driving factors: (1) disparity between within-class densities, which is amplified by propagation, and (2) overlapping support (mixing of embeddings) across classes. Our study sheds light on the effects of using graph propagation based models and classification datasets for outlier detection for the first time.
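The evaluation protocol described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's code: it stands in for the GNN/graph-kernel detector with fixed per-class score distributions, builds the two down-sampled dataset variants, and computes ROC-AUC as the rank statistic P(score of outlier > score of inlier). Because the detector's per-class scoring is the same in both variants, the two AUCs flip and approximately sum to 1, mirroring the phenomenon the abstract reports. All names (`make_variant`, the Gaussian score distributions, the 10% down-sampling rate) are assumptions for the sketch.

```python
import random

def roc_auc(scores, labels):
    # ROC-AUC = P(outlier score > inlier score), counting ties as 0.5.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def make_variant(scores_a, scores_b, outlier_class, rate=0.1):
    """Repurpose a binary dataset for GLOD: down-sample one class
    to `rate` of its size and label it as the outlier class (1)."""
    rng = random.Random(0)
    if outlier_class == "A":
        out, inl = rng.sample(scores_a, int(rate * len(scores_a))), scores_b
    else:
        out, inl = rng.sample(scores_b, int(rate * len(scores_b))), scores_a
    return out + inl, [1] * len(out) + [0] * len(inl)

# Hypothetical detector scores per class: the detector systematically
# assigns higher outlier scores to class-A graphs than class-B graphs.
rng = random.Random(42)
scores_a = [rng.gauss(1.0, 0.5) for _ in range(500)]
scores_b = [rng.gauss(0.0, 0.5) for _ in range(500)]

s1, y1 = make_variant(scores_a, scores_b, "A")  # variant 1: A is the outlier
s2, y2 = make_variant(scores_a, scores_b, "B")  # variant 2: B is the outlier
auc1, auc2 = roc_auc(s1, y1), roc_auc(s2, y2)
print(auc1, auc2, auc1 + auc2)  # AUCs flip from high to low; sum is close to 1
```

The flip follows directly from the rank-based definition of ROC-AUC: if the detector's score ordering of the two classes is unchanged, relabeling which class counts as "outlier" replaces each pairwise comparison with its complement, so the second AUC is roughly 1 minus the first.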
