A Stackelberg Game Perspective on the Conflict Between Machine Learning and Data Obfuscation
Data is the new oil; this refrain is repeated extensively in the age of internet tracking, machine learning, and data analytics. Social network analysis, cookie-based advertising, and government surveillance are all evidence of the use of data for commercial and national interests. Public pressure, however, is mounting for the protection of privacy. Frameworks such as differential privacy offer machine learning algorithms methods to guarantee limits on information disclosure, but they are seldom implemented. Instead, developers have recently made significant efforts to undermine tracking through obfuscation tools that hide user characteristics in a sea of noise. These services highlight an emerging clash between tracking and data obfuscation. In this paper, we conceptualize this conflict through a dynamic game between users and a machine learning algorithm that uses empirical risk minimization. The machine learner moves first by declaring a privacy protection level, and users then respond by choosing their own perturbation amounts; we analyze this leader-follower interaction as a Stackelberg game. The utility functions quantify accuracy using expected loss and privacy using the bounds of differential privacy. In equilibrium, we find that selfish users tend to cause significant utility loss to trackers by perturbing heavily, a phenomenon reminiscent of public-good games. Trackers, however, can improve the balance by proactively perturbing the data themselves. While other work in this area has studied privacy markets and mechanism design for truthful reporting of user information, we take a different viewpoint by considering both user and learner perturbation.
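The leader-follower structure described in the abstract can be sketched numerically. The Python toy model below is purely illustrative: the quadratic accuracy loss, the logarithmic privacy benefit standing in for a differential-privacy bound, and all parameter values (`N_USERS`, the weights, the grids) are assumptions made for this sketch, not the utility functions or results from the paper.

```python
import numpy as np

# Illustrative sketch of the leader-follower interaction described above.
# All functional forms and parameters here are assumptions for illustration,
# not the utilities analyzed in the paper.

N_USERS = 50           # number of users contributing (perturbed) data records
PRIVACY_WEIGHT = 0.5   # assumed weight users place on privacy
ACCURACY_WEIGHT = 1.0  # assumed weight users place on the learned model's quality

def expected_loss(user_noise_levels, learner_noise):
    """Stand-in for the learner's expected loss under empirical risk minimization:
    per-record user noise degrades accuracy directly, while the learner's own
    perturbation is attenuated by the sample size (an assumption of this sketch)."""
    per_record = float(np.mean(np.square(user_noise_levels)))
    return per_record + learner_noise ** 2 / N_USERS

def user_utility(own_noise, others_noise, learner_noise):
    """One user's utility: a concave privacy benefit (stand-in for a differential
    privacy bound) minus the shared accuracy cost, which gives the model its
    public-good flavor: each user bears only 1/N of the accuracy damage."""
    privacy = PRIVACY_WEIGHT * np.log(1.0 + own_noise + learner_noise)
    noise_levels = [own_noise] + [others_noise] * (N_USERS - 1)
    return privacy - ACCURACY_WEIGHT * expected_loss(noise_levels, learner_noise)

def followers_equilibrium(learner_noise, grid, iters=100):
    """Symmetric users' (followers') equilibrium via best-response iteration."""
    sigma = 0.0
    for _ in range(iters):
        best = max(grid, key=lambda s: user_utility(s, sigma, learner_noise))
        if abs(best - sigma) < 1e-9:
            break
        sigma = best
    return sigma

# The leader (learner) anticipates the followers' response and picks the
# privacy protection level (its own perturbation) that maximizes its utility.
user_grid = np.linspace(0.0, 5.0, 501)
outcomes = []
for learner_noise in np.linspace(0.0, 10.0, 101):
    sigma_star = followers_equilibrium(learner_noise, user_grid)
    loss = expected_loss([sigma_star] * N_USERS, learner_noise)
    outcomes.append((learner_noise, sigma_star, -loss))

best = max(outcomes, key=lambda o: o[2])
print(f"leader's perturbation: {best[0]:.2f}, "
      f"users' equilibrium perturbation: {best[1]:.2f}, "
      f"learner utility: {best[2]:.3f}")
```

The grid search here only illustrates the order of play (the leader commits to a perturbation level, the followers best-respond); it makes no claim about the paper's actual equilibrium characterization or numerical results.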