A General Coreset-Based Approach to Diversity Maximization under Matroid Constraints

8 February 2020
Matteo Ceccarello, A. Pietracaprina, G. Pucci
arXiv: 2002.03175 [abs] [PDF] [HTML]
Abstract

Diversity maximization is a fundamental problem in web search and data mining. For a given dataset S of n elements, the problem requires determining a subset of S containing k ≪ n "representatives" which maximize some diversity function expressed in terms of pairwise distances, where distance models dissimilarity. An important variant of the problem prescribes that the solution satisfy an additional orthogonal requirement, which can be specified as a matroid constraint (i.e., a feasible solution must be an independent set of size k of a given matroid). While unconstrained diversity maximization admits efficient coreset-based strategies for several diversity functions, known approaches dealing with the additional matroid constraint apply only to one diversity function (sum of distances) and are based on an expensive, inherently sequential local search over the entire input dataset. We devise the first coreset-based algorithms for diversity maximization under matroid constraints for various diversity functions, together with efficient sequential, MapReduce and Streaming implementations. Technically, our algorithms rely on the construction of a small coreset, that is, a subset of S containing a feasible solution which is no more than a factor 1 − ε away from the optimal solution for S. While our algorithms are fully general, for the partition and transversal matroids, if ε is a constant in (0, 1) and S has bounded doubling dimension, the coreset size is independent of n and small enough to afford the execution of a slow sequential algorithm to extract a final, accurate solution in reasonable time. Extensive experiments show that our algorithms are accurate, fast and scalable, and are therefore capable of dealing with the large input instances typical of the big data scenario.
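
The abstract describes a generic two-phase pattern: first build a small coreset that still contains a feasible, near-optimal solution, then run an expensive extraction procedure only on the coreset. The sketch below is illustrative only and is not the paper's algorithm: it assumes remote-clique diversity (sum of pairwise distances), a partition matroid over hypothetical categories "a", "b", "c", and a simple per-category farthest-first traversal to build the coreset. All names and parameters (POINTS, CATEGORIES, CORESET_PER_CATEGORY) are invented for the example.

```python
import itertools
import math
import random

# Hypothetical toy setup: points in the plane, each tagged with a category.
# A partition matroid caps how many points may be chosen per category.
random.seed(0)
CATEGORIES = {"a": 2, "b": 1, "c": 1}   # per-category caps (sum = k = 4)
POINTS = [(random.uniform(0, 10), random.uniform(0, 10),
           random.choice(list(CATEGORIES))) for _ in range(200)]
K = sum(CATEGORIES.values())

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def diversity(sol):
    """Remote-clique diversity: sum of pairwise distances."""
    return sum(dist(p, q) for p, q in itertools.combinations(sol, 2))

def farthest_first(points, m):
    """Greedy farthest-point traversal: pick m well-spread points."""
    chosen = [points[0]]
    while len(chosen) < m and len(chosen) < len(points):
        chosen.append(max(points, key=lambda p: min(dist(p, c) for c in chosen)))
    return chosen

# Coreset construction (sketch): keep a handful of well-spread points from
# every category, so the coreset still contains a feasible, high-diversity
# solution while being much smaller than the input.
CORESET_PER_CATEGORY = 6
coreset = []
for cat in CATEGORIES:
    cat_points = [p for p in POINTS if p[2] == cat]
    coreset.extend(farthest_first(cat_points, CORESET_PER_CATEGORY))

def feasible(sol):
    """Independent set of the partition matroid: respect per-category caps."""
    counts = {c: 0 for c in CATEGORIES}
    for p in sol:
        counts[p[2]] += 1
    return all(counts[c] <= CATEGORIES[c] for c in CATEGORIES)

# The coreset is small enough to afford an expensive exact search.
best = max((s for s in itertools.combinations(coreset, K) if feasible(s)),
           key=diversity)
print(f"coreset size: {len(coreset)} (vs. {len(POINTS)} input points)")
print(f"best diversity on coreset: {diversity(best):.2f}")
```

On the small coreset even this brute-force search over all feasible k-subsets finishes quickly, which mirrors the abstract's point: once the coreset size no longer depends on n, a slow but accurate final extraction step becomes affordable.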
