
arXiv:1912.07820
Balancing the Tradeoff Between Clustering Value and Interpretability

17 December 2019
Sandhya Saisubramanian
Sainyam Galhotra
S. Zilberstein
Abstract

Graph clustering groups entities -- the vertices of a graph -- based on their similarity, typically using a complex distance function over a large number of features. Successful integration of clustering approaches in automated decision-support systems hinges on the interpretability of the resulting clusters. This paper addresses the problem of generating interpretable clusters, given features of interest that signify interpretability to an end-user, by optimizing interpretability in addition to common clustering objectives. We propose a β-interpretable clustering algorithm that ensures that at least a β fraction of nodes in each cluster share the same feature value. The tunable parameter β is user-specified. We also present a more efficient algorithm for scenarios with β = 1 and analyze the theoretical guarantees of the two algorithms. Finally, we empirically demonstrate the benefits of our approaches in generating interpretable clusters using four real-world datasets. The interpretability of the clusters is complemented by generating simple explanations denoting the feature values of the nodes in the clusters, using frequent pattern mining.
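The β-interpretability criterion from the abstract can be sketched as a simple check: a clustering satisfies it when, in every cluster, at least a β fraction of nodes share the same value of the feature of interest. The sketch below is illustrative only; the function names are hypothetical and this is not the authors' algorithm, just a verifier for the stated property.

```python
from collections import Counter

def interpretability(clusters, feature):
    """For each cluster, the fraction of nodes sharing the most common
    value of the feature of interest (hypothetical helper)."""
    scores = {}
    for cid, nodes in clusters.items():
        counts = Counter(feature[n] for n in nodes)
        scores[cid] = counts.most_common(1)[0][1] / len(nodes)
    return scores

def is_beta_interpretable(clusters, feature, beta):
    # The clustering is beta-interpretable when every cluster meets the
    # beta threshold on its dominant feature value.
    return all(s >= beta for s in interpretability(clusters, feature).values())

# Toy example: two clusters over six nodes with a categorical feature.
clusters = {0: ["a", "b", "c"], 1: ["d", "e", "f"]}
feature = {"a": "x", "b": "x", "c": "y", "d": "z", "e": "z", "f": "z"}
print(interpretability(clusters, feature))          # cluster 0: 2/3, cluster 1: 1.0
print(is_beta_interpretable(clusters, feature, 0.6))  # True
print(is_beta_interpretable(clusters, feature, 1.0))  # False: cluster 0 is mixed
```

The β = 1 special case handled by the authors' more efficient algorithm corresponds to every cluster being pure in the feature of interest.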
