arXiv:1908.08619
Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms

22 August 2019
Ruoxi Jia
David Dao
Wei Ping
F. Hubis
Nezihe Merve Gürel
Yue Liu
Ce Zhang
C. Spanos
D. Song
Abstract

Given a data set $\mathcal{D}$ containing millions of data points and a data consumer who is willing to pay $\$X$ to train a machine learning (ML) model over $\mathcal{D}$, how should we distribute this $\$X$ to each data point to reflect its "value"? In this paper, we define the "relative value of data" via the Shapley value, as it uniquely possesses properties with appealing real-world interpretations, such as fairness, rationality, and decentralizability. For general, bounded utility functions, the Shapley value is known to be challenging to compute: to get Shapley values for all $N$ data points, it requires $O(2^N)$ model evaluations for exact computation and $O(N\log N)$ for $(\epsilon, \delta)$-approximation. In this paper, we focus on one popular family of ML models relying on $K$-nearest neighbors ($K$NN). The most surprising result is that for unweighted $K$NN classifiers and regressors, the Shapley value of all $N$ data points can be computed, exactly, in $O(N\log N)$ time -- an exponential improvement on computational complexity! Moreover, for $(\epsilon, \delta)$-approximation, we are able to develop an algorithm based on Locality Sensitive Hashing (LSH) with only sublinear complexity $O(N^{h(\epsilon,K)}\log N)$ when $\epsilon$ is not too small and $K$ is not too large. We empirically evaluate our algorithms on up to 10 million data points; even our exact algorithm is up to three orders of magnitude faster than the baseline approximation algorithm. The LSH-based approximation algorithm can accelerate the value calculation process even further. We then extend our algorithms to other scenarios, such as (1) weighted $K$NN classifiers, (2) different data points clustered by different data curators, and (3) data analysts providing computation who also require proper valuation.
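To make the $O(N\log N)$ exact result concrete, here is a minimal sketch of how such a computation can proceed for an unweighted $K$NN classifier and a single test point: sort the training points by distance to the test point once, then fill in the Shapley values in a single backward pass, since each value differs from its neighbor's by a small closed-form correction. The specific recursion below follows the form reported for this setting; the function name and data layout are illustrative, not taken from the paper's code.

```python
import numpy as np

def knn_shapley(X_train, y_train, x_test, y_test, K):
    """Shapley value of each training point for one test point under an
    unweighted K-NN classifier, with utility = fraction of the K nearest
    neighbors whose label matches y_test. One sort + one linear pass,
    hence O(N log N) overall. Illustrative sketch, not the authors' code."""
    N = len(X_train)
    # Sort training points by distance to the test point (ascending).
    dists = np.linalg.norm(X_train - x_test, axis=1)
    order = np.argsort(dists)            # order[0] is the nearest neighbor
    match = (y_train[order] == y_test).astype(float)

    s = np.zeros(N)
    # Farthest point first: its value depends only on its own label match.
    s[N - 1] = match[N - 1] / N
    # Walk inward: each value is a closed-form correction to the next one out.
    for i in range(N - 2, -1, -1):
        s[i] = s[i + 1] + (match[i] - match[i + 1]) / K * min(K, i + 1) / (i + 1)

    # Undo the sort so values align with the original training order.
    values = np.zeros(N)
    values[order] = s
    return values
```

A quick sanity check of the efficiency property: the values sum to the utility of the full data set, i.e. the fraction of the $K$ nearest neighbors that match the test label.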
